00:00:00.001 Started by upstream project "autotest-per-patch" build number 120652
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.059 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.060 The recommended git tool is: git
00:00:00.060 using credential 00000000-0000-0000-0000-000000000002
00:00:00.061 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.084 Fetching changes from the remote Git repository
00:00:00.085 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.104 Using shallow fetch with depth 1
00:00:00.104 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.104 > git --version # timeout=10
00:00:00.132 > git --version # 'git version 2.39.2'
00:00:00.132 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.133 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.133 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.904 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.914 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.925 Checking out Revision a704ed4d86859cb8cbec080c78b138476da6ee34 (FETCH_HEAD)
00:00:03.925 > git config core.sparsecheckout # timeout=10
00:00:03.935 > git read-tree -mu HEAD # timeout=10
00:00:03.949 > git checkout -f a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=5
00:00:03.965 Commit message: "packer: Insert post-processors only if at least one is defined"
00:00:03.965 > git rev-list --no-walk a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=10
00:00:04.065 [Pipeline] Start of Pipeline
00:00:04.080 [Pipeline] library
00:00:04.082 Loading library shm_lib@master
00:00:04.082 Library shm_lib@master is cached. Copying from home.
00:00:04.099 [Pipeline] node
00:00:04.113 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.114 [Pipeline] {
00:00:04.123 [Pipeline] catchError
00:00:04.124 [Pipeline] {
00:00:04.134 [Pipeline] wrap
00:00:04.140 [Pipeline] {
00:00:04.146 [Pipeline] stage
00:00:04.147 [Pipeline] { (Prologue)
00:00:04.305 [Pipeline] sh
00:00:04.587 + logger -p user.info -t JENKINS-CI
00:00:04.606 [Pipeline] echo
00:00:04.607 Node: GP11
00:00:04.616 [Pipeline] sh
00:00:04.916 [Pipeline] setCustomBuildProperty
00:00:04.928 [Pipeline] echo
00:00:04.930 Cleanup processes
00:00:04.934 [Pipeline] sh
00:00:05.211 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.211 43958 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.224 [Pipeline] sh
00:00:05.507 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.507 ++ grep -v 'sudo pgrep'
00:00:05.507 ++ awk '{print $1}'
00:00:05.507 + sudo kill -9
00:00:05.507 + true
00:00:05.522 [Pipeline] cleanWs
00:00:05.531 [WS-CLEANUP] Deleting project workspace...
00:00:05.531 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.538 [WS-CLEANUP] done
00:00:05.543 [Pipeline] setCustomBuildProperty
00:00:05.560 [Pipeline] sh
00:00:05.843 + sudo git config --global --replace-all safe.directory '*'
00:00:05.903 [Pipeline] nodesByLabel
00:00:05.906 Found a total of 1 nodes with the 'sorcerer' label
00:00:05.915 [Pipeline] httpRequest
00:00:05.920 HttpMethod: GET
00:00:05.920 URL: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:05.927 Sending request to url: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:05.930 Response Code: HTTP/1.1 200 OK
00:00:05.931 Success: Status code 200 is in the accepted range: 200,404
00:00:05.931 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:06.599 [Pipeline] sh
00:00:06.882 + tar --no-same-owner -xf jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:06.903 [Pipeline] httpRequest
00:00:06.908 HttpMethod: GET
00:00:06.908 URL: http://10.211.164.101/packages/spdk_c064dc58412c9533ec0f3c28b7f7bea72245f322.tar.gz
00:00:06.909 Sending request to url: http://10.211.164.101/packages/spdk_c064dc58412c9533ec0f3c28b7f7bea72245f322.tar.gz
00:00:06.913 Response Code: HTTP/1.1 200 OK
00:00:06.913 Success: Status code 200 is in the accepted range: 200,404
00:00:06.914 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c064dc58412c9533ec0f3c28b7f7bea72245f322.tar.gz
00:00:20.974 [Pipeline] sh
00:00:21.255 + tar --no-same-owner -xf spdk_c064dc58412c9533ec0f3c28b7f7bea72245f322.tar.gz
00:00:23.797 [Pipeline] sh
00:00:24.080 + git -C spdk log --oneline -n5
00:00:24.080 c064dc584 trace: rename trace_event's poller_id to owner_id
00:00:24.080 23f700383 trace: add concept of "owner" to trace files
00:00:24.080 67f328f92 trace: rename "per_lcore_history" to just "data"
00:00:24.080 38dca48f0 libvfio-user: update submodule to point to `spdk` branch
00:00:24.080 7a71abf69 fuzz/llvm_vfio_fuzz: limit length of generated data to `bytes_per_cmd`
00:00:24.094 [Pipeline] }
00:00:24.113 [Pipeline] // stage
00:00:24.122 [Pipeline] stage
00:00:24.124 [Pipeline] { (Prepare)
00:00:24.145 [Pipeline] writeFile
00:00:24.166 [Pipeline] sh
00:00:24.451 + logger -p user.info -t JENKINS-CI
00:00:24.466 [Pipeline] sh
00:00:24.751 + logger -p user.info -t JENKINS-CI
00:00:24.763 [Pipeline] sh
00:00:25.046 + cat autorun-spdk.conf
00:00:25.046 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.046 SPDK_TEST_NVMF=1
00:00:25.046 SPDK_TEST_NVME_CLI=1
00:00:25.046 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:25.046 SPDK_TEST_NVMF_NICS=e810
00:00:25.046 SPDK_TEST_VFIOUSER=1
00:00:25.046 SPDK_RUN_UBSAN=1
00:00:25.046 NET_TYPE=phy
00:00:25.054 RUN_NIGHTLY=0
00:00:25.060 [Pipeline] readFile
00:00:25.087 [Pipeline] withEnv
00:00:25.089 [Pipeline] {
00:00:25.104 [Pipeline] sh
00:00:25.388 + set -ex
00:00:25.388 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:25.388 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:25.388 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.388 ++ SPDK_TEST_NVMF=1
00:00:25.388 ++ SPDK_TEST_NVME_CLI=1
00:00:25.388 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:25.388 ++ SPDK_TEST_NVMF_NICS=e810
00:00:25.388 ++ SPDK_TEST_VFIOUSER=1
00:00:25.389 ++ SPDK_RUN_UBSAN=1
00:00:25.389 ++ NET_TYPE=phy
00:00:25.389 ++ RUN_NIGHTLY=0
00:00:25.389 + case $SPDK_TEST_NVMF_NICS in
00:00:25.389 + DRIVERS=ice
00:00:25.389 + [[ tcp == \r\d\m\a ]]
00:00:25.389 + [[ -n ice ]]
00:00:25.389 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:25.389 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:25.389 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:25.389 rmmod: ERROR: Module irdma is not currently loaded
00:00:25.389 rmmod: ERROR: Module i40iw is not currently loaded
00:00:25.389 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:25.389 + true
00:00:25.389 + for D in $DRIVERS
00:00:25.389 + sudo modprobe ice
00:00:25.389 + exit 0
00:00:25.398 [Pipeline] }
00:00:25.415 [Pipeline] // withEnv
00:00:25.420 [Pipeline] }
00:00:25.436 [Pipeline] // stage
00:00:25.446 [Pipeline] catchError
00:00:25.447 [Pipeline] {
00:00:25.463 [Pipeline] timeout
00:00:25.463 Timeout set to expire in 40 min
00:00:25.465 [Pipeline] {
00:00:25.481 [Pipeline] stage
00:00:25.483 [Pipeline] { (Tests)
00:00:25.499 [Pipeline] sh
00:00:25.780 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:25.780 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:25.780 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:25.780 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:25.780 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:25.780 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:25.780 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:25.780 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:25.780 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:25.780 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:25.780 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:25.780 + source /etc/os-release
00:00:25.780 ++ NAME='Fedora Linux'
00:00:25.780 ++ VERSION='38 (Cloud Edition)'
00:00:25.780 ++ ID=fedora
00:00:25.780 ++ VERSION_ID=38
00:00:25.780 ++ VERSION_CODENAME=
00:00:25.780 ++ PLATFORM_ID=platform:f38
00:00:25.780 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:25.780 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:25.780 ++ LOGO=fedora-logo-icon
00:00:25.780 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:25.780 ++ HOME_URL=https://fedoraproject.org/
00:00:25.780 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:25.780 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:25.780 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:25.780 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:25.780 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:25.780 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:25.780 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:25.780 ++ SUPPORT_END=2024-05-14
00:00:25.780 ++ VARIANT='Cloud Edition'
00:00:25.780 ++ VARIANT_ID=cloud
00:00:25.780 + uname -a
00:00:25.780 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:25.780 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:26.716 Hugepages
00:00:26.716 node hugesize free / total
00:00:26.716 node0 1048576kB 0 / 0
00:00:26.716 node0 2048kB 0 / 0
00:00:26.716 node1 1048576kB 0 / 0
00:00:26.716 node1 2048kB 0 / 0
00:00:26.716
00:00:26.716 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:26.716 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:26.716 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:26.716 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:26.716 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:26.716 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:26.716 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:26.716 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:26.716 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:26.716 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:26.716 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:26.716 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:26.716 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:26.716 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:26.716 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:26.716 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:26.716 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:26.975 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:26.975 + rm -f /tmp/spdk-ld-path
00:00:26.975 + source autorun-spdk.conf
00:00:26.975 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:26.975 ++ SPDK_TEST_NVMF=1
00:00:26.975 ++ SPDK_TEST_NVME_CLI=1
00:00:26.975 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:26.975 ++ SPDK_TEST_NVMF_NICS=e810
00:00:26.975 ++ SPDK_TEST_VFIOUSER=1
00:00:26.975 ++ SPDK_RUN_UBSAN=1
00:00:26.975 ++ NET_TYPE=phy
00:00:26.975 ++ RUN_NIGHTLY=0
00:00:26.975 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:26.975 + [[ -n '' ]]
00:00:26.975 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:26.975 + for M in /var/spdk/build-*-manifest.txt
00:00:26.975 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:26.975 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:26.975 + for M in /var/spdk/build-*-manifest.txt
00:00:26.975 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:26.975 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:26.975 ++ uname
00:00:26.975 + [[ Linux == \L\i\n\u\x ]]
00:00:26.975 + sudo dmesg -T
00:00:26.975 + sudo dmesg --clear
00:00:26.975 + dmesg_pid=44623
00:00:26.975 + [[ Fedora Linux == FreeBSD ]]
00:00:26.975 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:26.975 + sudo dmesg -Tw
00:00:26.975 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:26.975 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:26.975 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:26.975 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:26.975 + [[ -x /usr/src/fio-static/fio ]]
00:00:26.975 + export FIO_BIN=/usr/src/fio-static/fio
00:00:26.975 + FIO_BIN=/usr/src/fio-static/fio
00:00:26.975 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:26.975 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:26.975 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:26.975 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:26.975 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:26.975 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:26.975 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:26.975 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:26.975 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:26.975 Test configuration:
00:00:26.975 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:26.975 SPDK_TEST_NVMF=1
00:00:26.975 SPDK_TEST_NVME_CLI=1
00:00:26.975 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:26.975 SPDK_TEST_NVMF_NICS=e810
00:00:26.975 SPDK_TEST_VFIOUSER=1
00:00:26.975 SPDK_RUN_UBSAN=1
00:00:26.975 NET_TYPE=phy
00:00:26.975 RUN_NIGHTLY=0
00:00:26.975 03:14:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:26.975 03:14:04 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:26.975 03:14:04 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:26.975 03:14:04 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:26.975 03:14:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:26.975 03:14:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:26.975 03:14:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:26.975 03:14:04 -- paths/export.sh@5 -- $ export PATH
00:00:26.975 03:14:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:26.975 03:14:04 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:26.975 03:14:04 -- common/autobuild_common.sh@435 -- $ date +%s
00:00:26.975 03:14:04 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713489244.XXXXXX
00:00:26.975 03:14:04 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713489244.HXaETW
00:00:26.975 03:14:04 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:00:26.975 03:14:04 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:00:26.975 03:14:04 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:26.975 03:14:04 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:26.975 03:14:04 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:26.975 03:14:04 -- common/autobuild_common.sh@451 -- $ get_config_params
00:00:26.975 03:14:04 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:00:26.975 03:14:04 -- common/autotest_common.sh@10 -- $ set +x
00:00:26.975 03:14:04 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:26.975 03:14:04 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:00:26.975 03:14:04 -- pm/common@17 -- $ local monitor
00:00:26.975 03:14:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:26.975 03:14:04 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=44657
00:00:26.975 03:14:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:26.975 03:14:04 -- pm/common@21 -- $ date +%s
00:00:26.975 03:14:04 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=44659
00:00:26.975 03:14:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:26.975 03:14:04 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=44662
00:00:26.975 03:14:04 -- pm/common@21 -- $ date +%s
00:00:26.975 03:14:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:26.975 03:14:04 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=44665
00:00:26.975 03:14:04 -- pm/common@21 -- $ date +%s
00:00:26.975 03:14:04 -- pm/common@26 -- $ sleep 1
00:00:26.975 03:14:04 -- pm/common@21 -- $ date +%s
00:00:26.975 03:14:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713489244
00:00:26.975 03:14:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713489244
00:00:26.975 03:14:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713489244
00:00:26.975 03:14:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713489244
00:00:26.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713489244_collect-vmstat.pm.log
00:00:26.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713489244_collect-cpu-temp.pm.log
00:00:26.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713489244_collect-cpu-load.pm.log
00:00:26.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713489244_collect-bmc-pm.bmc.pm.log
00:00:28.359 03:14:05 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:00:28.359 03:14:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:28.359 03:14:05 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:28.359 03:14:05 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:28.359 03:14:05 -- spdk/autobuild.sh@16 -- $ date -u
00:00:28.359 Fri Apr 19 01:14:05 AM UTC 2024
00:00:28.359 03:14:05 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:28.359 v24.05-pre-413-gc064dc584
00:00:28.359 03:14:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:28.359 03:14:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:28.359 03:14:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:28.359 03:14:05 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:28.359 03:14:05 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:28.359 03:14:05 -- common/autotest_common.sh@10 -- $ set +x
00:00:28.359 ************************************
00:00:28.359 START TEST ubsan
00:00:28.359 ************************************
00:00:28.359 03:14:05 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:00:28.359 using ubsan
00:00:28.359
00:00:28.359 real 0m0.000s
00:00:28.359 user 0m0.000s
00:00:28.359 sys 0m0.000s
00:00:28.359 03:14:05 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:00:28.359 03:14:05 -- common/autotest_common.sh@10 -- $ set +x
00:00:28.359 ************************************
00:00:28.359 END TEST ubsan
00:00:28.359 ************************************
00:00:28.359 03:14:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:28.359 03:14:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:28.359 03:14:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:28.359 03:14:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:28.359 03:14:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:28.359 03:14:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:28.359 03:14:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:28.359 03:14:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:28.359 03:14:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:28.359 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:28.359 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:28.617 Using 'verbs' RDMA provider
00:00:39.191 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:00:49.188 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:00:49.188 Creating mk/config.mk...done.
00:00:49.188 Creating mk/cc.flags.mk...done.
00:00:49.188 Type 'make' to build.
00:00:49.188 03:14:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:00:49.188 03:14:25 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:49.188 03:14:25 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:49.188 03:14:25 -- common/autotest_common.sh@10 -- $ set +x
00:00:49.188 ************************************
00:00:49.188 START TEST make
00:00:49.188 ************************************
00:00:49.188 03:14:26 -- common/autotest_common.sh@1111 -- $ make -j48
00:00:49.188 make[1]: Nothing to be done for 'all'.
00:00:50.615 The Meson build system
00:00:50.615 Version: 1.3.1
00:00:50.615 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:00:50.615 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:50.615 Build type: native build
00:00:50.615 Project name: libvfio-user
00:00:50.615 Project version: 0.0.1
00:00:50.615 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:50.615 C linker for the host machine: cc ld.bfd 2.39-16
00:00:50.615 Host machine cpu family: x86_64
00:00:50.615 Host machine cpu: x86_64
00:00:50.615 Run-time dependency threads found: YES
00:00:50.615 Library dl found: YES
00:00:50.615 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:50.615 Run-time dependency json-c found: YES 0.17
00:00:50.615 Run-time dependency cmocka found: YES 1.1.7
00:00:50.615 Program pytest-3 found: NO
00:00:50.615 Program flake8 found: NO
00:00:50.615 Program misspell-fixer found: NO
00:00:50.615 Program restructuredtext-lint found: NO
00:00:50.615 Program valgrind found: YES (/usr/bin/valgrind)
00:00:50.615 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:50.615 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:50.615 Compiler for C supports arguments -Wwrite-strings: YES
00:00:50.615 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:50.615 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:00:50.615 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:00:50.615 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:50.615 Build targets in project: 8
00:00:50.615 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:00:50.615 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:00:50.615
00:00:50.615 libvfio-user 0.0.1
00:00:50.615
00:00:50.615 User defined options
00:00:50.615 buildtype : debug
00:00:50.615 default_library: shared
00:00:50.615 libdir : /usr/local/lib
00:00:50.615
00:00:50.615 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:51.194 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:51.468 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:00:51.468 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:00:51.468 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:00:51.468 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:00:51.468 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:00:51.468 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:00:51.468 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:00:51.468 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:00:51.468 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:00:51.733 [10/37] Compiling C object samples/null.p/null.c.o
00:00:51.733 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:00:51.733 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:00:51.733 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:00:51.733 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:00:51.733 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:00:51.733 [16/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:00:51.733 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:00:51.733 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:00:51.733 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:00:51.733 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:00:51.733 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:00:51.733 [22/37] Compiling C object samples/server.p/server.c.o
00:00:51.733 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:00:51.733 [24/37] Compiling C object samples/client.p/client.c.o
00:00:51.733 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:00:51.733 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:00:51.997 [27/37] Linking target samples/client
00:00:51.997 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:00:51.997 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:00:51.997 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:00:51.997 [31/37] Linking target test/unit_tests
00:00:52.259 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:00:52.259 [33/37] Linking target samples/server
00:00:52.259 [34/37] Linking target samples/null
00:00:52.259 [35/37] Linking target samples/gpio-pci-idio-16
00:00:52.259 [36/37] Linking target samples/shadow_ioeventfd_server
00:00:52.259 [37/37] Linking target samples/lspci
00:00:52.259 INFO: autodetecting backend as ninja
00:00:52.259 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:52.259 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:53.213 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:53.213 ninja: no work to do.
00:00:57.423 The Meson build system
00:00:57.423 Version: 1.3.1
00:00:57.423 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:00:57.423 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:00:57.423 Build type: native build
00:00:57.423 Program cat found: YES (/usr/bin/cat)
00:00:57.423 Project name: DPDK
00:00:57.423 Project version: 23.11.0
00:00:57.423 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:57.423 C linker for the host machine: cc ld.bfd 2.39-16
00:00:57.423 Host machine cpu family: x86_64
00:00:57.423 Host machine cpu: x86_64
00:00:57.423 Message: ## Building in Developer Mode ##
00:00:57.423 Program pkg-config found: YES (/usr/bin/pkg-config)
00:00:57.423 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:00:57.423 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:00:57.423 Program python3 found: YES (/usr/bin/python3)
00:00:57.423 Program cat found: YES (/usr/bin/cat)
00:00:57.423 Compiler for C supports arguments -march=native: YES
00:00:57.423 Checking for size of "void *" : 8
00:00:57.423 Checking for size of "void *" : 8 (cached)
00:00:57.423 Library m found: YES
00:00:57.423 Library numa found: YES
00:00:57.423 Has header "numaif.h" : YES
00:00:57.423 Library fdt found: NO
00:00:57.423 Library execinfo found: NO
00:00:57.423 Has header "execinfo.h" : YES
00:00:57.423 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:57.423 Run-time dependency libarchive found: NO (tried pkgconfig)
00:00:57.423 Run-time dependency libbsd found: NO (tried pkgconfig)
00:00:57.423 Run-time dependency jansson found: NO (tried pkgconfig)
00:00:57.423 Run-time dependency openssl found: YES 3.0.9
00:00:57.423 Run-time dependency libpcap found: YES 1.10.4
00:00:57.423 Has header "pcap.h" with dependency libpcap: YES
00:00:57.423 Compiler for C supports arguments -Wcast-qual: YES
00:00:57.423 Compiler for C supports arguments -Wdeprecated: YES
00:00:57.423 Compiler for C supports arguments -Wformat: YES
00:00:57.423 Compiler for C supports arguments -Wformat-nonliteral: NO
00:00:57.423 Compiler for C supports arguments -Wformat-security: NO
00:00:57.423 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:57.423 Compiler for C supports arguments -Wmissing-prototypes: YES
00:00:57.423 Compiler for C supports arguments -Wnested-externs: YES
00:00:57.423 Compiler for C supports arguments -Wold-style-definition: YES
00:00:57.423 Compiler for C supports arguments -Wpointer-arith: YES
00:00:57.423 Compiler for C supports arguments -Wsign-compare: YES
00:00:57.423 Compiler for C supports arguments -Wstrict-prototypes: YES
00:00:57.423 Compiler for C supports arguments -Wundef: YES
00:00:57.423 Compiler for C supports arguments -Wwrite-strings: YES
00:00:57.423 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:00:57.423 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:00:57.423 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:57.423 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:00:57.423 Program objdump found: YES (/usr/bin/objdump)
00:00:57.423 Compiler for C supports arguments -mavx512f: YES
00:00:57.423 Checking if "AVX512 checking" compiles: YES
00:00:57.423 Fetching value of define "__SSE4_2__" : 1
00:00:57.423 Fetching value of define "__AES__" : 1
00:00:57.423 Fetching value of define "__AVX__" : 1
00:00:57.423 Fetching value of define "__AVX2__" : (undefined)
00:00:57.423 Fetching value of define "__AVX512BW__" : (undefined)
00:00:57.423 Fetching value of define "__AVX512CD__" : (undefined)
00:00:57.423 Fetching value of define "__AVX512DQ__" : (undefined)
00:00:57.423 Fetching value of define "__AVX512F__" : (undefined)
00:00:57.423 Fetching value of define "__AVX512VL__" : (undefined)
00:00:57.423 Fetching value of define "__PCLMUL__" : 1
00:00:57.423 Fetching value of define "__RDRND__" : 1
00:00:57.423 Fetching value of define "__RDSEED__" : (undefined)
00:00:57.423 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:00:57.423 Fetching value of define "__znver1__" : (undefined)
00:00:57.423 Fetching value of define "__znver2__" : (undefined)
00:00:57.423 Fetching value of define "__znver3__" : (undefined)
00:00:57.423 Fetching value of define "__znver4__" : (undefined)
00:00:57.423 Compiler for C supports arguments -Wno-format-truncation: YES
00:00:57.423 Message: lib/log: Defining dependency "log"
00:00:57.423 Message: lib/kvargs: Defining dependency "kvargs"
00:00:57.423 Message: lib/telemetry: Defining dependency "telemetry"
00:00:57.423 Checking for function "getentropy" : NO
00:00:57.423 Message: lib/eal: Defining dependency "eal"
00:00:57.423 Message: lib/ring: Defining dependency "ring"
00:00:57.423 Message: lib/rcu: Defining dependency "rcu"
00:00:57.423 Message: lib/mempool: Defining dependency "mempool"
00:00:57.423 Message: lib/mbuf: Defining dependency "mbuf"
00:00:57.423 Fetching value of define "__PCLMUL__" : 1 (cached)
00:00:57.423 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:00:57.423 Compiler for C supports arguments -mpclmul: YES
00:00:57.423 Compiler for C supports arguments -maes: YES
00:00:57.423 Compiler for C supports arguments -mavx512f: YES (cached)
00:00:57.423 Compiler for C supports arguments -mavx512bw: YES
00:00:57.423 Compiler for C supports arguments -mavx512dq: YES
00:00:57.423 Compiler for C supports arguments -mavx512vl: YES
00:00:57.423 Compiler for C supports arguments -mvpclmulqdq: YES
00:00:57.423 Compiler for C supports arguments -mavx2: YES
00:00:57.423 Compiler for C supports arguments -mavx: YES
00:00:57.423 Message: lib/net: Defining dependency "net"
00:00:57.423 Message: lib/meter: Defining dependency "meter"
00:00:57.423 Message: lib/ethdev: Defining dependency "ethdev"
00:00:57.423 Message: lib/pci: Defining dependency "pci"
00:00:57.423 Message: lib/cmdline: Defining dependency "cmdline"
00:00:57.423 Message: lib/hash: Defining dependency "hash"
00:00:57.423 Message: lib/timer: Defining dependency "timer"
00:00:57.423 Message: lib/compressdev: Defining dependency "compressdev"
00:00:57.423 Message: lib/cryptodev: Defining dependency "cryptodev"
00:00:57.423 Message: lib/dmadev: Defining dependency "dmadev"
00:00:57.423 Compiler for C supports arguments -Wno-cast-qual: YES
00:00:57.423 Message: lib/power: Defining dependency "power"
00:00:57.423 Message: lib/reorder: Defining dependency "reorder"
00:00:57.423 Message: lib/security: Defining dependency "security"
00:00:57.423 Has header "linux/userfaultfd.h" : YES
00:00:57.423 Has header "linux/vduse.h" : YES
00:00:57.423 Message: lib/vhost: Defining dependency "vhost"
00:00:57.423 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:00:57.423 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:00:57.423 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:00:57.423 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:00:57.423 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:00:57.423 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:00:57.423 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:00:57.423 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:00:57.423 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:00:57.423 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:00:57.423 Program doxygen found: YES (/usr/bin/doxygen)
00:00:57.423 Configuring doxy-api-html.conf using configuration
00:00:57.423 Configuring doxy-api-man.conf using configuration
00:00:57.423 Program mandb found: YES (/usr/bin/mandb)
00:00:57.423 Program sphinx-build found: NO
00:00:57.423 Configuring rte_build_config.h using configuration
00:00:57.423 Message:
00:00:57.423 =================
00:00:57.423 Applications Enabled
00:00:57.423 =================
00:00:57.423
00:00:57.423 apps:
00:00:57.423
00:00:57.423
00:00:57.423 Message:
00:00:57.423 =================
00:00:57.423 Libraries Enabled
00:00:57.423 =================
00:00:57.423
00:00:57.423 libs:
00:00:57.423 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:00:57.423 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:00:57.423 cryptodev, dmadev, power, reorder, security, vhost,
00:00:57.423
00:00:57.423 Message:
00:00:57.423 ===============
00:00:57.423 Drivers Enabled
00:00:57.423 ===============
00:00:57.423
00:00:57.423 common:
00:00:57.423
00:00:57.423 bus:
00:00:57.423 pci, vdev,
00:00:57.423 mempool:
00:00:57.423 ring,
00:00:57.423 dma:
00:00:57.423
00:00:57.423 net:
00:00:57.423
00:00:57.423 crypto:
00:00:57.423
00:00:57.423 compress:
00:00:57.424
00:00:57.424 vdpa:
00:00:57.424
00:00:57.424
00:00:57.424 Message:
00:00:57.424 =================
00:00:57.424 Content Skipped
00:00:57.424 =================
00:00:57.424
00:00:57.424 apps:
00:00:57.424 dumpcap: explicitly disabled via build config
00:00:57.424 graph: explicitly disabled via build config
00:00:57.424 pdump: explicitly disabled via build config
00:00:57.424 proc-info: explicitly disabled via build config
00:00:57.424 test-acl: explicitly disabled via build config
00:00:57.424 test-bbdev: explicitly disabled via build config
00:00:57.424 test-cmdline: explicitly disabled via build config
00:00:57.424 test-compress-perf: explicitly disabled via build config
00:00:57.424 test-crypto-perf: explicitly disabled via build config
00:00:57.424 test-dma-perf: explicitly disabled via build config
00:00:57.424 test-eventdev: explicitly disabled via build config
00:00:57.424 test-fib: explicitly disabled via build config
00:00:57.424 test-flow-perf: explicitly disabled via build config
00:00:57.424 test-gpudev: explicitly disabled via build config
00:00:57.424 test-mldev: explicitly disabled via build config
00:00:57.424 test-pipeline: explicitly disabled via build config
00:00:57.424 test-pmd: explicitly disabled via build config
00:00:57.424 test-regex: explicitly disabled via build config
00:00:57.424 test-sad: explicitly disabled via build config
00:00:57.424 test-security-perf: explicitly disabled via build config
00:00:57.424
00:00:57.424 libs:
00:00:57.424 metrics: explicitly disabled via build config
00:00:57.424 acl: explicitly disabled via build config
00:00:57.424 bbdev: explicitly disabled via build config
00:00:57.424 bitratestats: explicitly disabled via build config
00:00:57.424 bpf: explicitly disabled via build config
00:00:57.424 cfgfile: explicitly disabled via build config
00:00:57.424 distributor: explicitly disabled via build config
00:00:57.424 efd: explicitly disabled via build config
00:00:57.424 eventdev: explicitly disabled via build config
00:00:57.424 dispatcher: explicitly disabled via build config
00:00:57.424 gpudev: explicitly disabled via build config
00:00:57.424 gro: explicitly disabled via build config
00:00:57.424 gso: explicitly disabled via build config
00:00:57.424 ip_frag: explicitly disabled via build config
00:00:57.424 jobstats: explicitly disabled via build config
00:00:57.424 latencystats: explicitly disabled via build config
00:00:57.424 lpm: explicitly disabled via build config
00:00:57.424 member: explicitly disabled via build config
00:00:57.424 pcapng: explicitly disabled via build config
00:00:57.424 rawdev: explicitly disabled via build config
00:00:57.424 regexdev: explicitly disabled via build config
00:00:57.424 mldev: explicitly disabled via build config
00:00:57.424 rib: explicitly disabled via build config
00:00:57.424 sched: explicitly disabled via build config
00:00:57.424 stack: explicitly disabled via build config
00:00:57.424 ipsec: explicitly disabled via build config
00:00:57.424 pdcp: explicitly disabled via build config
00:00:57.424 fib: explicitly disabled via build config
00:00:57.424 port: explicitly disabled via build config
00:00:57.424 pdump: explicitly disabled via build config
00:00:57.424 table: explicitly disabled via build config
00:00:57.424 pipeline: explicitly disabled via build config
00:00:57.424 graph: explicitly disabled via build config
00:00:57.424 node: explicitly disabled via build config
00:00:57.424
00:00:57.424 drivers:
00:00:57.424 common/cpt: not in enabled drivers build config
00:00:57.424 common/dpaax: not in enabled drivers build config
00:00:57.424 common/iavf: not in enabled drivers build config
00:00:57.424 common/idpf: not in enabled drivers build config
00:00:57.424 common/mvep: not in enabled drivers build config
00:00:57.424 common/octeontx: not in enabled drivers build config
00:00:57.424 bus/auxiliary: not in enabled drivers build config
00:00:57.424 bus/cdx: not in enabled drivers build config
00:00:57.424 bus/dpaa: not in enabled drivers build config
00:00:57.424 bus/fslmc: not in enabled drivers build config
00:00:57.424 bus/ifpga: not in enabled drivers build config
00:00:57.424 bus/platform: not in enabled drivers build config
00:00:57.424 bus/vmbus: not in enabled drivers build config
00:00:57.424 common/cnxk: not in enabled drivers build config
00:00:57.424 common/mlx5: not in enabled drivers build config
00:00:57.424 common/nfp: not in enabled drivers build config
00:00:57.424 common/qat: not in enabled drivers build config
00:00:57.424 common/sfc_efx: not in enabled drivers build config
00:00:57.424 mempool/bucket: not in enabled drivers build config
00:00:57.424 mempool/cnxk: not in enabled drivers build config
00:00:57.424 mempool/dpaa: not in enabled drivers build config
00:00:57.424 mempool/dpaa2: not in enabled drivers build config
00:00:57.424 mempool/octeontx: not in enabled drivers build config
00:00:57.424 mempool/stack: not in enabled drivers build config
00:00:57.424 dma/cnxk: not in enabled drivers build config
00:00:57.424 dma/dpaa: not in enabled drivers build config
00:00:57.424 dma/dpaa2: not in enabled drivers build config
00:00:57.424 dma/hisilicon: not in enabled drivers build config
00:00:57.424 dma/idxd: not in enabled drivers build config
00:00:57.424 dma/ioat: not in enabled drivers build config
00:00:57.424 dma/skeleton: not in enabled drivers build config
00:00:57.424 net/af_packet: not in enabled drivers build config
00:00:57.424 net/af_xdp: not in enabled drivers build config
00:00:57.424 net/ark: not in enabled drivers build config
00:00:57.424 net/atlantic: not in enabled drivers build config
00:00:57.424 net/avp: not in enabled drivers build config
00:00:57.424 net/axgbe: not in enabled drivers build config
00:00:57.424 net/bnx2x: not in enabled drivers build config
00:00:57.424 net/bnxt: not in enabled drivers build config
00:00:57.424 net/bonding: not in enabled drivers build config
00:00:57.424 net/cnxk: not in enabled drivers build config
00:00:57.424 net/cpfl: not in enabled drivers build config
00:00:57.424 net/cxgbe: not in enabled drivers build config
00:00:57.424 net/dpaa: not in enabled drivers build config
00:00:57.424 net/dpaa2: not in enabled drivers build config
00:00:57.424 net/e1000: not in enabled drivers build config
00:00:57.424 net/ena: not in enabled drivers build config
00:00:57.424 net/enetc: not in enabled drivers build config
00:00:57.424 net/enetfec: not in enabled drivers build config
00:00:57.424 net/enic: not in enabled drivers build config
00:00:57.424 net/failsafe: not in enabled drivers build config
00:00:57.424 net/fm10k: not in enabled drivers build config
00:00:57.424 net/gve: not in enabled drivers build config
00:00:57.424 net/hinic: not in enabled drivers build config
00:00:57.424 net/hns3: not in enabled drivers build config
00:00:57.424 net/i40e: not in enabled drivers build config
00:00:57.424 net/iavf: not in enabled drivers build config
00:00:57.424 net/ice: not in enabled drivers build config
00:00:57.424 net/idpf: not in enabled drivers build config
00:00:57.424 net/igc: not in enabled drivers build config
00:00:57.424 net/ionic: not in enabled drivers build config
00:00:57.424 net/ipn3ke: not in enabled drivers build config
00:00:57.424 net/ixgbe: not in enabled drivers build config
00:00:57.424 net/mana: not in enabled drivers build config
00:00:57.424 net/memif: not in enabled drivers build config
00:00:57.424 net/mlx4: not in enabled drivers build config
00:00:57.424 net/mlx5: not in enabled drivers build config
00:00:57.424 net/mvneta: not in enabled drivers build config
00:00:57.424 net/mvpp2: not in enabled drivers build config
00:00:57.424 net/netvsc: not in enabled drivers build config
00:00:57.424 net/nfb: not in enabled drivers build config
00:00:57.424 net/nfp: not in enabled drivers build config
00:00:57.424 net/ngbe: not in enabled drivers build config
00:00:57.424 net/null: not in enabled drivers build config
00:00:57.424 net/octeontx: not in enabled drivers build config
00:00:57.424 net/octeon_ep: not in enabled drivers build config
00:00:57.424 net/pcap: not in enabled drivers build config
00:00:57.424 net/pfe: not in enabled drivers build config
00:00:57.424 net/qede: not in enabled drivers build config
00:00:57.424 net/ring: not in enabled drivers build config
00:00:57.424 net/sfc: not in enabled drivers build config
00:00:57.424 net/softnic: not in enabled drivers build config
00:00:57.424 net/tap: not in enabled drivers build config
00:00:57.424 net/thunderx: not in enabled drivers build config
00:00:57.424 net/txgbe: not in enabled drivers build config
00:00:57.424 net/vdev_netvsc: not in enabled drivers build config
00:00:57.424 net/vhost: not in enabled drivers build config
00:00:57.424 net/virtio: not in enabled drivers build config
00:00:57.424 net/vmxnet3: not in enabled drivers build config
00:00:57.424 raw/*: missing internal dependency, "rawdev"
00:00:57.424 crypto/armv8: not in enabled drivers build config
00:00:57.424 crypto/bcmfs: not in enabled drivers build config
00:00:57.424 crypto/caam_jr: not in enabled drivers build config
00:00:57.424 crypto/ccp: not in enabled drivers build config
00:00:57.424 crypto/cnxk: not in enabled drivers build config
00:00:57.424 crypto/dpaa_sec: not in enabled drivers build config
00:00:57.424 crypto/dpaa2_sec: not in enabled drivers build config
00:00:57.424 crypto/ipsec_mb: not in enabled drivers build config
00:00:57.424 crypto/mlx5: not in enabled drivers build config
00:00:57.424 crypto/mvsam: not in enabled drivers build config
00:00:57.424 crypto/nitrox: not in enabled drivers build config
00:00:57.424 crypto/null: not in enabled drivers build config
00:00:57.424 crypto/octeontx: not in enabled drivers build config
00:00:57.424 crypto/openssl: not in enabled drivers build config
00:00:57.424 crypto/scheduler: not in enabled drivers build config
00:00:57.424 crypto/uadk: not in enabled drivers build config
00:00:57.424 crypto/virtio: not in enabled drivers build config
00:00:57.424 compress/isal: not in enabled drivers build config
00:00:57.424 compress/mlx5: not in enabled drivers build config
00:00:57.424 compress/octeontx: not in enabled drivers build config
00:00:57.424 compress/zlib: not in enabled drivers build config
00:00:57.424 regex/*: missing internal dependency, "regexdev"
00:00:57.424 ml/*: missing internal dependency, "mldev"
00:00:57.424 vdpa/ifc: not in enabled drivers build config
00:00:57.424 vdpa/mlx5: not in enabled drivers build config
00:00:57.424 vdpa/nfp: not in enabled drivers build config
00:00:57.424 vdpa/sfc: not in enabled drivers build config
00:00:57.424 event/*: missing internal dependency, "eventdev"
00:00:57.424 baseband/*: missing internal dependency, "bbdev"
00:00:57.424 gpu/*: missing internal dependency, "gpudev"
00:00:57.424
00:00:57.424
00:00:57.684 Build targets in project: 85
00:00:57.684
00:00:57.684 DPDK 23.11.0
00:00:57.684
00:00:57.684 User defined options
00:00:57.684 buildtype : debug
00:00:57.684 default_library : shared
00:00:57.684 libdir : lib
00:00:57.684 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:57.684 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:00:57.684 c_link_args :
00:00:57.684 cpu_instruction_set: native
00:00:57.684 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:00:57.684 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:00:57.684 enable_docs : false
00:00:57.684 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:00:57.684 enable_kmods : false
00:00:57.684 tests : false
00:00:57.684
00:00:57.684 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:58.261 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:00:58.261 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:00:58.261 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:00:58.261 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:00:58.261 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:00:58.261 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:00:58.261 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:00:58.261 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:00:58.261 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:00:58.261 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:00:58.261 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:00:58.261 [11/265] Linking static target lib/librte_kvargs.a
00:00:58.261 [12/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:00:58.261 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:00:58.261 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:00:58.261 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:00:58.261 [16/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:00:58.261 [17/265] Linking static target lib/librte_log.a
00:00:58.261 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:00:58.524 [19/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:00:58.524 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:00:58.524 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:00:58.790 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:00:59.051 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:00:59.051 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:00:59.051 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:00:59.051 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:00:59.051 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:00:59.051 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:00:59.051 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:00:59.051 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:00:59.051 [31/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:00:59.051 [32/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:00:59.051 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:00:59.051 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:00:59.051 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:00:59.051 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:00:59.051 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:00:59.051 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:00:59.051 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:00:59.051 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:00:59.051 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:00:59.051 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:00:59.051 [43/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:00:59.051 [44/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:00:59.051 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:00:59.051 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:00:59.315 [47/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:00:59.315 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:00:59.315 [49/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:00:59.315 [50/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:00:59.315 [51/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:00:59.315 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:00:59.315 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:00:59.315 [54/265] Linking static target lib/librte_telemetry.a
00:00:59.315 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:00:59.315 [56/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:00:59.315 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:00:59.315 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:00:59.315 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:00:59.315 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:00:59.315 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:00:59.315 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:00:59.315 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:00:59.315 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:00:59.315 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:00:59.315 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:00:59.577 [67/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:00:59.577 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:00:59.577 [69/265] Linking static target lib/librte_pci.a
00:00:59.577 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:00:59.577 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:00:59.577 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:00:59.577 [73/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:00:59.577 [74/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:00:59.577 [75/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:00:59.577 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:00:59.577 [77/265] Linking target lib/librte_log.so.24.0
00:00:59.577 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:00:59.577 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:00:59.840 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:00:59.840 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:00:59.840 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:00:59.840 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:00:59.840 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:00:59.840 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:00:59.840 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:00:59.840 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:00:59.840 [88/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:00:59.840 [89/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:00:59.840 [90/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:00.103 [91/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:00.103 [92/265] Linking target lib/librte_kvargs.so.24.0
00:01:00.103 [93/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:00.103 [94/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:00.103 [95/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:00.103 [96/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:00.103 [97/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:00.103 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:00.103 [99/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:00.103 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:00.103 [101/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:00.103 [102/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:00.103 [103/265] Linking static target lib/librte_ring.a
00:01:00.103 [104/265] Linking static target lib/librte_meter.a
00:01:00.103 [105/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:00.103 [106/265] Linking static target lib/librte_eal.a
00:01:00.366 [107/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:00.366 [108/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:00.366 [109/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:00.366 [110/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:00.366 [111/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:00.366 [112/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:00.366 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:00.366 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:00.366 [115/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:00.366 [116/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:00.366 [117/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:00.366 [118/265] Linking static target lib/librte_rcu.a
00:01:00.366 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:00.366 [120/265] Linking target lib/librte_telemetry.so.24.0
00:01:00.366 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:00.366 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:00.366 [123/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:00.366 [124/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:00.366 [125/265] Linking static target lib/librte_mempool.a
00:01:00.628 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:00.628 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:00.628 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:00.628 [129/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:00.628 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:00.628 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:00.628 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:00.628 [133/265] Linking static target lib/librte_cmdline.a
00:01:00.628 [134/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:00.628 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:00.628 [136/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:00.628 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:00.628 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:00.628 [139/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:00.890 [140/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:00.889 [141/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:00.889 [142/265] Linking static target lib/librte_net.a
00:01:00.890 [143/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:00.890 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:00.890 [145/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:00.890 [146/265] Linking static target lib/librte_timer.a
00:01:00.890 [147/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:00.890 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:00.890 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:01.150 [150/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:01.150 [151/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:01.150 [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:01.150 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:01.150 [154/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:01.150 [155/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:01.150 [156/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:01.150 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:01.150 [158/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:01.150
[159/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:01.150 [160/265] Linking static target lib/librte_dmadev.a 00:01:01.409 [161/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.409 [162/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:01.409 [163/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:01.409 [164/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:01.409 [165/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:01.409 [166/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:01.409 [167/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:01.409 [168/265] Linking static target lib/librte_hash.a 00:01:01.409 [169/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.409 [170/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:01.409 [171/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:01.409 [172/265] Linking static target lib/librte_compressdev.a 00:01:01.409 [173/265] Linking static target lib/librte_power.a 00:01:01.409 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:01.409 [175/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:01.668 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:01.668 [177/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:01.668 [178/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:01.668 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:01.668 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:01.668 [181/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:01.668 [182/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:01.668 [183/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:01.668 [184/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:01.668 [185/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:01.668 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:01.668 [187/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.668 [188/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:01.668 [189/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.668 [190/265] Linking static target lib/librte_reorder.a 00:01:01.668 [191/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:01.668 [192/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:01.668 [193/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:01.668 [194/265] Linking static target drivers/librte_bus_vdev.a 00:01:01.927 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:01.927 [196/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:01.927 [197/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:01.927 [198/265] Compiling C object 
drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:01.927 [199/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:01.927 [200/265] Linking static target lib/librte_mbuf.a 00:01:01.927 [201/265] Linking static target drivers/librte_bus_pci.a 00:01:01.927 [202/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:01.927 [203/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:01.927 [204/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:01.927 [205/265] Linking static target lib/librte_security.a 00:01:01.927 [206/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.927 [207/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.927 [208/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.927 [209/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:01.927 [210/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.927 [211/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.186 [212/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:02.186 [213/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:02.186 [214/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:02.186 [215/265] Linking static target drivers/librte_mempool_ring.a 00:01:02.186 [216/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.186 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:02.186 [218/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.186 [219/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.445 [220/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:02.446 [221/265] Linking static target lib/librte_cryptodev.a 00:01:02.446 [222/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:02.446 [223/265] Linking static target lib/librte_ethdev.a 00:01:03.824 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.760 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:06.665 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.665 [227/265] Linking target lib/librte_eal.so.24.0 00:01:06.665 [228/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.665 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:06.665 [230/265] Linking target lib/librte_ring.so.24.0 00:01:06.665 [231/265] Linking target lib/librte_timer.so.24.0 00:01:06.665 [232/265] Linking target lib/librte_meter.so.24.0 00:01:06.665 [233/265] Linking target lib/librte_pci.so.24.0 00:01:06.665 [234/265] Linking target lib/librte_dmadev.so.24.0 00:01:06.665 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:06.952 [236/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:06.952 [237/265] 
Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:06.952 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:06.952 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:06.952 [240/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:06.952 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:06.952 [242/265] Linking target lib/librte_mempool.so.24.0 00:01:06.952 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:06.952 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:06.952 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:06.952 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:06.952 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:07.211 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:07.211 [249/265] Linking target lib/librte_reorder.so.24.0 00:01:07.211 [250/265] Linking target lib/librte_compressdev.so.24.0 00:01:07.211 [251/265] Linking target lib/librte_net.so.24.0 00:01:07.211 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:07.470 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:07.470 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:07.470 [255/265] Linking target lib/librte_hash.so.24.0 00:01:07.470 [256/265] Linking target lib/librte_cmdline.so.24.0 00:01:07.470 [257/265] Linking target lib/librte_security.so.24.0 00:01:07.470 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:07.470 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:07.470 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:07.470 [261/265] Linking target lib/librte_power.so.24.0 00:01:10.003 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:10.003 [263/265] Linking static target lib/librte_vhost.a 00:01:10.941 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.941 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:10.941 INFO: autodetecting backend as ninja 00:01:10.941 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:11.878 CC lib/ut_mock/mock.o 00:01:11.878 CC lib/log/log.o 00:01:11.878 CC lib/log/log_flags.o 00:01:11.878 CC lib/log/log_deprecated.o 00:01:11.878 CC lib/ut/ut.o 00:01:12.136 LIB libspdk_ut_mock.a 00:01:12.136 SO libspdk_ut_mock.so.6.0 00:01:12.136 LIB libspdk_log.a 00:01:12.136 LIB libspdk_ut.a 00:01:12.136 SO libspdk_ut.so.2.0 00:01:12.136 SO libspdk_log.so.7.0 00:01:12.136 SYMLINK libspdk_ut_mock.so 00:01:12.136 SYMLINK libspdk_ut.so 00:01:12.136 SYMLINK libspdk_log.so 00:01:12.393 CC lib/dma/dma.o 00:01:12.393 CC lib/ioat/ioat.o 00:01:12.393 CXX lib/trace_parser/trace.o 00:01:12.393 CC lib/util/base64.o 00:01:12.393 CC lib/util/bit_array.o 00:01:12.393 CC lib/util/cpuset.o 00:01:12.393 CC lib/util/crc16.o 00:01:12.393 CC lib/util/crc32.o 00:01:12.393 CC lib/util/crc32c.o 00:01:12.393 CC lib/util/crc32_ieee.o 00:01:12.393 CC lib/util/crc64.o 00:01:12.393 CC lib/util/dif.o 00:01:12.393 CC lib/util/fd.o 00:01:12.393 CC lib/util/file.o 00:01:12.393 CC lib/util/hexlify.o 00:01:12.393 CC 
lib/util/iov.o 00:01:12.393 CC lib/util/math.o 00:01:12.393 CC lib/util/pipe.o 00:01:12.393 CC lib/util/strerror_tls.o 00:01:12.393 CC lib/util/string.o 00:01:12.393 CC lib/util/uuid.o 00:01:12.393 CC lib/util/fd_group.o 00:01:12.393 CC lib/util/xor.o 00:01:12.393 CC lib/util/zipf.o 00:01:12.393 CC lib/vfio_user/host/vfio_user_pci.o 00:01:12.393 CC lib/vfio_user/host/vfio_user.o 00:01:12.651 LIB libspdk_dma.a 00:01:12.651 SO libspdk_dma.so.4.0 00:01:12.651 SYMLINK libspdk_dma.so 00:01:12.651 LIB libspdk_ioat.a 00:01:12.651 SO libspdk_ioat.so.7.0 00:01:12.651 LIB libspdk_vfio_user.a 00:01:12.651 SYMLINK libspdk_ioat.so 00:01:12.651 SO libspdk_vfio_user.so.5.0 00:01:12.651 SYMLINK libspdk_vfio_user.so 00:01:12.908 LIB libspdk_util.a 00:01:12.908 SO libspdk_util.so.9.0 00:01:13.166 SYMLINK libspdk_util.so 00:01:13.166 CC lib/conf/conf.o 00:01:13.166 CC lib/rdma/common.o 00:01:13.166 CC lib/idxd/idxd.o 00:01:13.166 CC lib/json/json_parse.o 00:01:13.166 CC lib/vmd/vmd.o 00:01:13.166 CC lib/rdma/rdma_verbs.o 00:01:13.166 CC lib/idxd/idxd_user.o 00:01:13.166 CC lib/env_dpdk/env.o 00:01:13.166 CC lib/vmd/led.o 00:01:13.166 CC lib/json/json_util.o 00:01:13.166 CC lib/env_dpdk/memory.o 00:01:13.166 CC lib/json/json_write.o 00:01:13.166 CC lib/env_dpdk/pci.o 00:01:13.166 CC lib/env_dpdk/init.o 00:01:13.166 CC lib/env_dpdk/threads.o 00:01:13.166 CC lib/env_dpdk/pci_ioat.o 00:01:13.166 CC lib/env_dpdk/pci_virtio.o 00:01:13.166 CC lib/env_dpdk/pci_vmd.o 00:01:13.166 CC lib/env_dpdk/pci_idxd.o 00:01:13.167 CC lib/env_dpdk/pci_event.o 00:01:13.167 CC lib/env_dpdk/sigbus_handler.o 00:01:13.167 CC lib/env_dpdk/pci_dpdk.o 00:01:13.167 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:13.167 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:13.425 LIB libspdk_trace_parser.a 00:01:13.425 SO libspdk_trace_parser.so.5.0 00:01:13.425 SYMLINK libspdk_trace_parser.so 00:01:13.425 LIB libspdk_conf.a 00:01:13.425 SO libspdk_conf.so.6.0 00:01:13.682 LIB libspdk_json.a 00:01:13.682 SYMLINK libspdk_conf.so 00:01:13.682 SO libspdk_json.so.6.0 00:01:13.682 LIB libspdk_rdma.a 00:01:13.682 SYMLINK libspdk_json.so 00:01:13.682 SO libspdk_rdma.so.6.0 00:01:13.682 SYMLINK libspdk_rdma.so 00:01:13.941 LIB libspdk_idxd.a 00:01:13.941 CC lib/jsonrpc/jsonrpc_server.o 00:01:13.941 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:13.941 CC lib/jsonrpc/jsonrpc_client.o 00:01:13.941 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:13.941 SO libspdk_idxd.so.12.0 00:01:13.941 SYMLINK libspdk_idxd.so 00:01:13.941 LIB libspdk_vmd.a 00:01:13.941 SO libspdk_vmd.so.6.0 00:01:13.941 SYMLINK libspdk_vmd.so 00:01:14.199 LIB libspdk_jsonrpc.a 00:01:14.199 SO libspdk_jsonrpc.so.6.0 00:01:14.199 SYMLINK libspdk_jsonrpc.so 00:01:14.458 CC lib/rpc/rpc.o 00:01:14.458 LIB libspdk_rpc.a 00:01:14.717 SO libspdk_rpc.so.6.0 00:01:14.717 SYMLINK libspdk_rpc.so 00:01:14.717 CC lib/trace/trace.o 00:01:14.717 CC lib/keyring/keyring.o 00:01:14.717 CC lib/trace/trace_flags.o 00:01:14.717 CC lib/keyring/keyring_rpc.o 00:01:14.717 CC lib/notify/notify.o 00:01:14.717 CC lib/trace/trace_rpc.o 00:01:14.717 CC lib/notify/notify_rpc.o 00:01:14.975 LIB libspdk_notify.a 00:01:14.975 SO libspdk_notify.so.6.0 00:01:14.975 LIB libspdk_keyring.a 00:01:14.975 SYMLINK libspdk_notify.so 00:01:14.975 SO libspdk_keyring.so.1.0 00:01:14.975 LIB libspdk_trace.a 00:01:15.233 SO libspdk_trace.so.10.0 00:01:15.233 SYMLINK libspdk_keyring.so 00:01:15.233 SYMLINK libspdk_trace.so 00:01:15.233 CC lib/sock/sock.o 00:01:15.233 CC lib/sock/sock_rpc.o 00:01:15.233 CC lib/thread/thread.o 00:01:15.233 CC lib/thread/iobuf.o 
00:01:15.491 LIB libspdk_env_dpdk.a 00:01:15.491 SO libspdk_env_dpdk.so.14.0 00:01:15.491 SYMLINK libspdk_env_dpdk.so 00:01:15.750 LIB libspdk_sock.a 00:01:15.750 SO libspdk_sock.so.9.0 00:01:15.750 SYMLINK libspdk_sock.so 00:01:16.010 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:16.010 CC lib/nvme/nvme_ctrlr.o 00:01:16.010 CC lib/nvme/nvme_fabric.o 00:01:16.010 CC lib/nvme/nvme_ns_cmd.o 00:01:16.010 CC lib/nvme/nvme_ns.o 00:01:16.010 CC lib/nvme/nvme_pcie_common.o 00:01:16.010 CC lib/nvme/nvme_pcie.o 00:01:16.010 CC lib/nvme/nvme_qpair.o 00:01:16.010 CC lib/nvme/nvme.o 00:01:16.010 CC lib/nvme/nvme_quirks.o 00:01:16.010 CC lib/nvme/nvme_transport.o 00:01:16.010 CC lib/nvme/nvme_discovery.o 00:01:16.010 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:16.010 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:16.010 CC lib/nvme/nvme_tcp.o 00:01:16.010 CC lib/nvme/nvme_opal.o 00:01:16.010 CC lib/nvme/nvme_io_msg.o 00:01:16.010 CC lib/nvme/nvme_poll_group.o 00:01:16.010 CC lib/nvme/nvme_zns.o 00:01:16.010 CC lib/nvme/nvme_stubs.o 00:01:16.010 CC lib/nvme/nvme_auth.o 00:01:16.010 CC lib/nvme/nvme_cuse.o 00:01:16.010 CC lib/nvme/nvme_vfio_user.o 00:01:16.010 CC lib/nvme/nvme_rdma.o 00:01:16.946 LIB libspdk_thread.a 00:01:16.946 SO libspdk_thread.so.10.0 00:01:16.946 SYMLINK libspdk_thread.so 00:01:17.205 CC lib/init/json_config.o 00:01:17.205 CC lib/vfu_tgt/tgt_endpoint.o 00:01:17.205 CC lib/virtio/virtio.o 00:01:17.205 CC lib/vfu_tgt/tgt_rpc.o 00:01:17.205 CC lib/init/subsystem.o 00:01:17.205 CC lib/virtio/virtio_vhost_user.o 00:01:17.205 CC lib/virtio/virtio_vfio_user.o 00:01:17.205 CC lib/init/subsystem_rpc.o 00:01:17.205 CC lib/virtio/virtio_pci.o 00:01:17.205 CC lib/init/rpc.o 00:01:17.205 CC lib/blob/blobstore.o 00:01:17.205 CC lib/accel/accel.o 00:01:17.205 CC lib/blob/request.o 00:01:17.205 CC lib/accel/accel_rpc.o 00:01:17.205 CC lib/blob/zeroes.o 00:01:17.205 CC lib/accel/accel_sw.o 00:01:17.205 CC lib/blob/blob_bs_dev.o 00:01:17.464 LIB libspdk_init.a 00:01:17.464 SO libspdk_init.so.5.0 00:01:17.464 LIB libspdk_virtio.a 00:01:17.464 LIB libspdk_vfu_tgt.a 00:01:17.464 SYMLINK libspdk_init.so 00:01:17.464 SO libspdk_vfu_tgt.so.3.0 00:01:17.464 SO libspdk_virtio.so.7.0 00:01:17.464 SYMLINK libspdk_vfu_tgt.so 00:01:17.464 SYMLINK libspdk_virtio.so 00:01:17.723 CC lib/event/app.o 00:01:17.723 CC lib/event/reactor.o 00:01:17.723 CC lib/event/log_rpc.o 00:01:17.723 CC lib/event/app_rpc.o 00:01:17.723 CC lib/event/scheduler_static.o 00:01:17.981 LIB libspdk_event.a 00:01:18.240 SO libspdk_event.so.13.0 00:01:18.240 SYMLINK libspdk_event.so 00:01:18.240 LIB libspdk_accel.a 00:01:18.240 SO libspdk_accel.so.15.0 00:01:18.240 SYMLINK libspdk_accel.so 00:01:18.499 LIB libspdk_nvme.a 00:01:18.499 CC lib/bdev/bdev.o 00:01:18.499 CC lib/bdev/bdev_rpc.o 00:01:18.499 CC lib/bdev/bdev_zone.o 00:01:18.499 CC lib/bdev/part.o 00:01:18.499 CC lib/bdev/scsi_nvme.o 00:01:18.499 SO libspdk_nvme.so.13.0 00:01:18.758 SYMLINK libspdk_nvme.so 00:01:20.133 LIB libspdk_blob.a 00:01:20.133 SO libspdk_blob.so.11.0 00:01:20.133 SYMLINK libspdk_blob.so 00:01:20.392 CC lib/blobfs/blobfs.o 00:01:20.392 CC lib/blobfs/tree.o 00:01:20.392 CC lib/lvol/lvol.o 00:01:20.958 LIB libspdk_bdev.a 00:01:20.958 LIB libspdk_blobfs.a 00:01:20.958 SO libspdk_bdev.so.15.0 00:01:20.958 SO libspdk_blobfs.so.10.0 00:01:21.225 LIB libspdk_lvol.a 00:01:21.225 SYMLINK libspdk_blobfs.so 00:01:21.225 SO libspdk_lvol.so.10.0 00:01:21.225 SYMLINK libspdk_bdev.so 00:01:21.225 SYMLINK libspdk_lvol.so 00:01:21.225 CC lib/nbd/nbd.o 00:01:21.225 CC lib/ublk/ublk.o 00:01:21.225 CC 
lib/ftl/ftl_core.o 00:01:21.225 CC lib/nvmf/ctrlr.o 00:01:21.225 CC lib/nbd/nbd_rpc.o 00:01:21.225 CC lib/ftl/ftl_init.o 00:01:21.225 CC lib/ublk/ublk_rpc.o 00:01:21.225 CC lib/nvmf/ctrlr_discovery.o 00:01:21.225 CC lib/scsi/dev.o 00:01:21.225 CC lib/ftl/ftl_layout.o 00:01:21.225 CC lib/nvmf/ctrlr_bdev.o 00:01:21.225 CC lib/scsi/lun.o 00:01:21.225 CC lib/ftl/ftl_debug.o 00:01:21.225 CC lib/nvmf/subsystem.o 00:01:21.225 CC lib/scsi/port.o 00:01:21.225 CC lib/nvmf/nvmf.o 00:01:21.225 CC lib/scsi/scsi.o 00:01:21.225 CC lib/ftl/ftl_io.o 00:01:21.225 CC lib/ftl/ftl_sb.o 00:01:21.225 CC lib/nvmf/nvmf_rpc.o 00:01:21.225 CC lib/scsi/scsi_bdev.o 00:01:21.225 CC lib/ftl/ftl_l2p.o 00:01:21.225 CC lib/nvmf/transport.o 00:01:21.225 CC lib/scsi/scsi_pr.o 00:01:21.225 CC lib/ftl/ftl_l2p_flat.o 00:01:21.225 CC lib/nvmf/tcp.o 00:01:21.225 CC lib/scsi/scsi_rpc.o 00:01:21.225 CC lib/ftl/ftl_nv_cache.o 00:01:21.225 CC lib/nvmf/vfio_user.o 00:01:21.225 CC lib/scsi/task.o 00:01:21.225 CC lib/ftl/ftl_band_ops.o 00:01:21.225 CC lib/ftl/ftl_band.o 00:01:21.225 CC lib/nvmf/rdma.o 00:01:21.225 CC lib/ftl/ftl_rq.o 00:01:21.225 CC lib/ftl/ftl_writer.o 00:01:21.225 CC lib/ftl/ftl_reloc.o 00:01:21.225 CC lib/ftl/ftl_l2p_cache.o 00:01:21.225 CC lib/ftl/ftl_p2l.o 00:01:21.225 CC lib/ftl/mngt/ftl_mngt.o 00:01:21.225 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:21.225 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:21.225 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:21.225 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:21.225 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:21.225 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:21.225 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:21.225 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:21.225 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:21.801 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:21.801 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:21.801 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:21.801 CC lib/ftl/utils/ftl_conf.o 00:01:21.801 CC lib/ftl/utils/ftl_md.o 00:01:21.801 CC lib/ftl/utils/ftl_mempool.o 00:01:21.801 CC lib/ftl/utils/ftl_bitmap.o 00:01:21.801 CC lib/ftl/utils/ftl_property.o 00:01:21.801 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:21.801 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:21.801 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:21.801 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:21.801 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:21.801 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:21.801 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:21.801 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:21.801 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:21.801 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:21.801 CC lib/ftl/base/ftl_base_dev.o 00:01:21.801 CC lib/ftl/base/ftl_base_bdev.o 00:01:21.801 CC lib/ftl/ftl_trace.o 00:01:22.059 LIB libspdk_nbd.a 00:01:22.059 SO libspdk_nbd.so.7.0 00:01:22.059 SYMLINK libspdk_nbd.so 00:01:22.317 LIB libspdk_scsi.a 00:01:22.317 LIB libspdk_ublk.a 00:01:22.317 SO libspdk_scsi.so.9.0 00:01:22.317 SO libspdk_ublk.so.3.0 00:01:22.317 SYMLINK libspdk_ublk.so 00:01:22.317 SYMLINK libspdk_scsi.so 00:01:22.575 CC lib/vhost/vhost.o 00:01:22.575 CC lib/iscsi/conn.o 00:01:22.575 CC lib/iscsi/init_grp.o 00:01:22.575 CC lib/vhost/vhost_rpc.o 00:01:22.575 CC lib/iscsi/iscsi.o 00:01:22.575 CC lib/vhost/vhost_scsi.o 00:01:22.575 CC lib/iscsi/md5.o 00:01:22.575 CC lib/vhost/vhost_blk.o 00:01:22.575 CC lib/iscsi/param.o 00:01:22.575 CC lib/vhost/rte_vhost_user.o 00:01:22.575 CC lib/iscsi/portal_grp.o 00:01:22.575 CC lib/iscsi/tgt_node.o 00:01:22.575 CC lib/iscsi/iscsi_subsystem.o 00:01:22.575 CC lib/iscsi/iscsi_rpc.o 00:01:22.575 CC lib/iscsi/task.o 00:01:22.575 LIB 
libspdk_ftl.a 00:01:22.833 SO libspdk_ftl.so.9.0 00:01:23.091 SYMLINK libspdk_ftl.so 00:01:23.658 LIB libspdk_vhost.a 00:01:23.916 SO libspdk_vhost.so.8.0 00:01:23.916 LIB libspdk_nvmf.a 00:01:23.916 SYMLINK libspdk_vhost.so 00:01:23.916 SO libspdk_nvmf.so.18.0 00:01:23.916 LIB libspdk_iscsi.a 00:01:24.174 SO libspdk_iscsi.so.8.0 00:01:24.174 SYMLINK libspdk_nvmf.so 00:01:24.174 SYMLINK libspdk_iscsi.so 00:01:24.459 CC module/env_dpdk/env_dpdk_rpc.o 00:01:24.459 CC module/vfu_device/vfu_virtio.o 00:01:24.459 CC module/vfu_device/vfu_virtio_blk.o 00:01:24.459 CC module/vfu_device/vfu_virtio_scsi.o 00:01:24.459 CC module/vfu_device/vfu_virtio_rpc.o 00:01:24.459 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:24.459 CC module/blob/bdev/blob_bdev.o 00:01:24.459 CC module/sock/posix/posix.o 00:01:24.718 CC module/keyring/file/keyring.o 00:01:24.718 CC module/accel/ioat/accel_ioat.o 00:01:24.718 CC module/accel/iaa/accel_iaa.o 00:01:24.718 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:24.718 CC module/scheduler/gscheduler/gscheduler.o 00:01:24.718 CC module/keyring/file/keyring_rpc.o 00:01:24.718 CC module/accel/ioat/accel_ioat_rpc.o 00:01:24.718 CC module/accel/iaa/accel_iaa_rpc.o 00:01:24.718 CC module/accel/error/accel_error.o 00:01:24.718 CC module/accel/dsa/accel_dsa.o 00:01:24.718 CC module/accel/error/accel_error_rpc.o 00:01:24.718 CC module/accel/dsa/accel_dsa_rpc.o 00:01:24.718 LIB libspdk_env_dpdk_rpc.a 00:01:24.718 SO libspdk_env_dpdk_rpc.so.6.0 00:01:24.718 SYMLINK libspdk_env_dpdk_rpc.so 00:01:24.718 LIB libspdk_keyring_file.a 00:01:24.718 LIB libspdk_scheduler_gscheduler.a 00:01:24.718 LIB libspdk_scheduler_dpdk_governor.a 00:01:24.718 SO libspdk_scheduler_gscheduler.so.4.0 00:01:24.718 SO libspdk_keyring_file.so.1.0 00:01:24.718 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:24.718 LIB libspdk_accel_error.a 00:01:24.718 LIB libspdk_accel_ioat.a 00:01:24.718 LIB libspdk_scheduler_dynamic.a 00:01:24.718 LIB libspdk_accel_iaa.a 00:01:24.718 SO libspdk_accel_error.so.2.0 00:01:24.718 SO libspdk_scheduler_dynamic.so.4.0 00:01:24.718 SO libspdk_accel_ioat.so.6.0 00:01:24.718 SYMLINK libspdk_scheduler_gscheduler.so 00:01:24.718 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:24.718 SYMLINK libspdk_keyring_file.so 00:01:24.718 SO libspdk_accel_iaa.so.3.0 00:01:24.718 LIB libspdk_accel_dsa.a 00:01:24.976 SYMLINK libspdk_accel_error.so 00:01:24.976 LIB libspdk_blob_bdev.a 00:01:24.976 SYMLINK libspdk_scheduler_dynamic.so 00:01:24.976 SO libspdk_accel_dsa.so.5.0 00:01:24.976 SYMLINK libspdk_accel_ioat.so 00:01:24.976 SO libspdk_blob_bdev.so.11.0 00:01:24.976 SYMLINK libspdk_accel_iaa.so 00:01:24.976 SYMLINK libspdk_accel_dsa.so 00:01:24.976 SYMLINK libspdk_blob_bdev.so 00:01:25.235 LIB libspdk_vfu_device.a 00:01:25.236 CC module/bdev/error/vbdev_error.o 00:01:25.236 CC module/bdev/error/vbdev_error_rpc.o 00:01:25.236 CC module/blobfs/bdev/blobfs_bdev.o 00:01:25.236 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:25.236 CC module/bdev/delay/vbdev_delay.o 00:01:25.236 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:25.236 CC module/bdev/passthru/vbdev_passthru.o 00:01:25.236 CC module/bdev/lvol/vbdev_lvol.o 00:01:25.236 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:25.236 CC module/bdev/null/bdev_null.o 00:01:25.236 CC module/bdev/gpt/gpt.o 00:01:25.236 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:25.236 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:25.236 CC module/bdev/null/bdev_null_rpc.o 00:01:25.236 CC module/bdev/aio/bdev_aio.o 00:01:25.236 CC 
module/bdev/raid/bdev_raid.o 00:01:25.236 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:25.236 CC module/bdev/gpt/vbdev_gpt.o 00:01:25.236 CC module/bdev/malloc/bdev_malloc.o 00:01:25.236 CC module/bdev/nvme/bdev_nvme.o 00:01:25.236 CC module/bdev/aio/bdev_aio_rpc.o 00:01:25.236 CC module/bdev/ftl/bdev_ftl.o 00:01:25.236 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:25.236 CC module/bdev/raid/bdev_raid_rpc.o 00:01:25.236 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:25.236 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:25.236 CC module/bdev/nvme/nvme_rpc.o 00:01:25.236 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:25.236 CC module/bdev/raid/bdev_raid_sb.o 00:01:25.236 CC module/bdev/nvme/bdev_mdns_client.o 00:01:25.236 CC module/bdev/raid/raid0.o 00:01:25.236 CC module/bdev/raid/raid1.o 00:01:25.236 CC module/bdev/nvme/vbdev_opal.o 00:01:25.236 CC module/bdev/raid/concat.o 00:01:25.236 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:25.236 CC module/bdev/split/vbdev_split.o 00:01:25.236 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:25.236 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:25.236 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:25.236 CC module/bdev/split/vbdev_split_rpc.o 00:01:25.236 CC module/bdev/iscsi/bdev_iscsi.o 00:01:25.236 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:25.236 SO libspdk_vfu_device.so.3.0 00:01:25.236 SYMLINK libspdk_vfu_device.so 00:01:25.494 LIB libspdk_sock_posix.a 00:01:25.494 SO libspdk_sock_posix.so.6.0 00:01:25.494 LIB libspdk_blobfs_bdev.a 00:01:25.494 SO libspdk_blobfs_bdev.so.6.0 00:01:25.494 LIB libspdk_bdev_passthru.a 00:01:25.753 LIB libspdk_bdev_split.a 00:01:25.753 SYMLINK libspdk_sock_posix.so 00:01:25.753 SO libspdk_bdev_passthru.so.6.0 00:01:25.753 SO libspdk_bdev_split.so.6.0 00:01:25.753 LIB libspdk_bdev_null.a 00:01:25.753 SYMLINK libspdk_blobfs_bdev.so 00:01:25.753 LIB libspdk_bdev_error.a 00:01:25.753 SO libspdk_bdev_null.so.6.0 00:01:25.753 SYMLINK libspdk_bdev_passthru.so 00:01:25.753 LIB libspdk_bdev_ftl.a 00:01:25.753 LIB libspdk_bdev_gpt.a 00:01:25.753 SYMLINK libspdk_bdev_split.so 00:01:25.753 SO libspdk_bdev_error.so.6.0 00:01:25.753 SO libspdk_bdev_gpt.so.6.0 00:01:25.753 SO libspdk_bdev_ftl.so.6.0 00:01:25.753 SYMLINK libspdk_bdev_null.so 00:01:25.753 LIB libspdk_bdev_aio.a 00:01:25.753 LIB libspdk_bdev_iscsi.a 00:01:25.753 SYMLINK libspdk_bdev_error.so 00:01:25.753 SO libspdk_bdev_aio.so.6.0 00:01:25.753 SYMLINK libspdk_bdev_gpt.so 00:01:25.753 LIB libspdk_bdev_zone_block.a 00:01:25.753 SYMLINK libspdk_bdev_ftl.so 00:01:25.753 LIB libspdk_bdev_delay.a 00:01:25.753 SO libspdk_bdev_iscsi.so.6.0 00:01:25.753 LIB libspdk_bdev_malloc.a 00:01:25.753 LIB libspdk_bdev_lvol.a 00:01:25.753 SO libspdk_bdev_zone_block.so.6.0 00:01:25.753 SO libspdk_bdev_delay.so.6.0 00:01:25.753 SO libspdk_bdev_malloc.so.6.0 00:01:25.753 SYMLINK libspdk_bdev_aio.so 00:01:25.753 SO libspdk_bdev_lvol.so.6.0 00:01:25.753 SYMLINK libspdk_bdev_iscsi.so 00:01:25.753 SYMLINK libspdk_bdev_zone_block.so 00:01:25.753 SYMLINK libspdk_bdev_delay.so 00:01:26.011 SYMLINK libspdk_bdev_malloc.so 00:01:26.011 SYMLINK libspdk_bdev_lvol.so 00:01:26.011 LIB libspdk_bdev_virtio.a 00:01:26.011 SO libspdk_bdev_virtio.so.6.0 00:01:26.011 SYMLINK libspdk_bdev_virtio.so 00:01:26.270 LIB libspdk_bdev_raid.a 00:01:26.270 SO libspdk_bdev_raid.so.6.0 00:01:26.529 SYMLINK libspdk_bdev_raid.so 00:01:27.464 LIB libspdk_bdev_nvme.a 00:01:27.464 SO libspdk_bdev_nvme.so.7.0 00:01:27.722 SYMLINK libspdk_bdev_nvme.so 00:01:27.988 CC module/event/subsystems/iobuf/iobuf.o 00:01:27.988 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:01:27.988 CC module/event/subsystems/vmd/vmd.o 00:01:27.988 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:27.988 CC module/event/subsystems/scheduler/scheduler.o 00:01:27.988 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:27.988 CC module/event/subsystems/sock/sock.o 00:01:27.988 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:27.988 CC module/event/subsystems/keyring/keyring.o 00:01:28.252 LIB libspdk_event_sock.a 00:01:28.252 LIB libspdk_event_keyring.a 00:01:28.252 LIB libspdk_event_vhost_blk.a 00:01:28.252 LIB libspdk_event_vmd.a 00:01:28.252 LIB libspdk_event_scheduler.a 00:01:28.252 LIB libspdk_event_vfu_tgt.a 00:01:28.252 LIB libspdk_event_iobuf.a 00:01:28.252 SO libspdk_event_keyring.so.1.0 00:01:28.252 SO libspdk_event_sock.so.5.0 00:01:28.252 SO libspdk_event_vhost_blk.so.3.0 00:01:28.252 SO libspdk_event_scheduler.so.4.0 00:01:28.252 SO libspdk_event_vfu_tgt.so.3.0 00:01:28.252 SO libspdk_event_vmd.so.6.0 00:01:28.252 SO libspdk_event_iobuf.so.3.0 00:01:28.252 SYMLINK libspdk_event_keyring.so 00:01:28.252 SYMLINK libspdk_event_sock.so 00:01:28.252 SYMLINK libspdk_event_vhost_blk.so 00:01:28.252 SYMLINK libspdk_event_scheduler.so 00:01:28.252 SYMLINK libspdk_event_vfu_tgt.so 00:01:28.252 SYMLINK libspdk_event_vmd.so 00:01:28.252 SYMLINK libspdk_event_iobuf.so 00:01:28.512 CC module/event/subsystems/accel/accel.o 00:01:28.512 LIB libspdk_event_accel.a 00:01:28.512 SO libspdk_event_accel.so.6.0 00:01:28.770 SYMLINK libspdk_event_accel.so 00:01:28.770 CC module/event/subsystems/bdev/bdev.o 00:01:29.029 LIB libspdk_event_bdev.a 00:01:29.029 SO libspdk_event_bdev.so.6.0 00:01:29.029 SYMLINK libspdk_event_bdev.so 00:01:29.288 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:29.288 CC module/event/subsystems/ublk/ublk.o 00:01:29.288 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:29.288 CC module/event/subsystems/scsi/scsi.o 00:01:29.288 CC module/event/subsystems/nbd/nbd.o 00:01:29.288 LIB libspdk_event_nbd.a 00:01:29.288 LIB libspdk_event_ublk.a 00:01:29.288 LIB libspdk_event_scsi.a 00:01:29.547 SO libspdk_event_nbd.so.6.0 00:01:29.547 SO libspdk_event_ublk.so.3.0 00:01:29.547 SO libspdk_event_scsi.so.6.0 00:01:29.547 SYMLINK libspdk_event_nbd.so 00:01:29.547 SYMLINK libspdk_event_ublk.so 00:01:29.547 SYMLINK libspdk_event_scsi.so 00:01:29.547 LIB libspdk_event_nvmf.a 00:01:29.547 SO libspdk_event_nvmf.so.6.0 00:01:29.547 SYMLINK libspdk_event_nvmf.so 00:01:29.547 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:29.547 CC module/event/subsystems/iscsi/iscsi.o 00:01:29.804 LIB libspdk_event_vhost_scsi.a 00:01:29.804 SO libspdk_event_vhost_scsi.so.3.0 00:01:29.804 LIB libspdk_event_iscsi.a 00:01:29.804 SO libspdk_event_iscsi.so.6.0 00:01:29.804 SYMLINK libspdk_event_vhost_scsi.so 00:01:29.804 SYMLINK libspdk_event_iscsi.so 00:01:30.066 SO libspdk.so.6.0 00:01:30.066 SYMLINK libspdk.so 00:01:30.333 CXX app/trace/trace.o 00:01:30.333 TEST_HEADER include/spdk/accel.h 00:01:30.333 TEST_HEADER include/spdk/accel_module.h 00:01:30.333 CC app/spdk_nvme_perf/perf.o 00:01:30.333 CC app/spdk_lspci/spdk_lspci.o 00:01:30.333 TEST_HEADER include/spdk/assert.h 00:01:30.333 CC app/spdk_nvme_identify/identify.o 00:01:30.333 TEST_HEADER include/spdk/barrier.h 00:01:30.333 TEST_HEADER include/spdk/base64.h 00:01:30.333 CC app/trace_record/trace_record.o 00:01:30.333 CC test/rpc_client/rpc_client_test.o 00:01:30.333 CC app/spdk_top/spdk_top.o 00:01:30.333 TEST_HEADER include/spdk/bdev.h 00:01:30.333 CC app/spdk_nvme_discover/discovery_aer.o 
00:01:30.333 TEST_HEADER include/spdk/bdev_module.h 00:01:30.333 TEST_HEADER include/spdk/bdev_zone.h 00:01:30.333 TEST_HEADER include/spdk/bit_array.h 00:01:30.333 TEST_HEADER include/spdk/bit_pool.h 00:01:30.333 TEST_HEADER include/spdk/blob_bdev.h 00:01:30.333 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:30.333 TEST_HEADER include/spdk/blobfs.h 00:01:30.333 TEST_HEADER include/spdk/blob.h 00:01:30.333 TEST_HEADER include/spdk/conf.h 00:01:30.333 TEST_HEADER include/spdk/config.h 00:01:30.333 TEST_HEADER include/spdk/cpuset.h 00:01:30.333 TEST_HEADER include/spdk/crc16.h 00:01:30.333 TEST_HEADER include/spdk/crc32.h 00:01:30.333 TEST_HEADER include/spdk/crc64.h 00:01:30.333 TEST_HEADER include/spdk/dif.h 00:01:30.333 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:30.333 TEST_HEADER include/spdk/dma.h 00:01:30.333 CC app/nvmf_tgt/nvmf_main.o 00:01:30.333 CC app/spdk_dd/spdk_dd.o 00:01:30.333 TEST_HEADER include/spdk/endian.h 00:01:30.333 TEST_HEADER include/spdk/env_dpdk.h 00:01:30.333 TEST_HEADER include/spdk/env.h 00:01:30.333 CC app/iscsi_tgt/iscsi_tgt.o 00:01:30.333 TEST_HEADER include/spdk/event.h 00:01:30.333 TEST_HEADER include/spdk/fd_group.h 00:01:30.333 TEST_HEADER include/spdk/fd.h 00:01:30.333 TEST_HEADER include/spdk/file.h 00:01:30.333 TEST_HEADER include/spdk/ftl.h 00:01:30.333 TEST_HEADER include/spdk/gpt_spec.h 00:01:30.333 TEST_HEADER include/spdk/hexlify.h 00:01:30.333 CC app/vhost/vhost.o 00:01:30.333 TEST_HEADER include/spdk/histogram_data.h 00:01:30.333 TEST_HEADER include/spdk/idxd.h 00:01:30.333 TEST_HEADER include/spdk/idxd_spec.h 00:01:30.333 TEST_HEADER include/spdk/init.h 00:01:30.333 TEST_HEADER include/spdk/ioat.h 00:01:30.333 TEST_HEADER include/spdk/ioat_spec.h 00:01:30.333 CC app/spdk_tgt/spdk_tgt.o 00:01:30.333 CC examples/sock/hello_world/hello_sock.o 00:01:30.333 CC test/env/pci/pci_ut.o 00:01:30.333 TEST_HEADER include/spdk/iscsi_spec.h 00:01:30.333 CC test/env/vtophys/vtophys.o 00:01:30.333 CC test/app/jsoncat/jsoncat.o 00:01:30.333 TEST_HEADER include/spdk/json.h 00:01:30.333 TEST_HEADER include/spdk/jsonrpc.h 00:01:30.333 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:30.333 CC examples/ioat/verify/verify.o 00:01:30.333 CC examples/ioat/perf/perf.o 00:01:30.333 CC test/env/memory/memory_ut.o 00:01:30.333 CC test/app/stub/stub.o 00:01:30.333 TEST_HEADER include/spdk/keyring.h 00:01:30.333 CC app/fio/nvme/fio_plugin.o 00:01:30.333 TEST_HEADER include/spdk/keyring_module.h 00:01:30.333 CC test/app/histogram_perf/histogram_perf.o 00:01:30.333 TEST_HEADER include/spdk/likely.h 00:01:30.333 CC test/nvme/aer/aer.o 00:01:30.333 CC examples/idxd/perf/perf.o 00:01:30.333 CC examples/vmd/lsvmd/lsvmd.o 00:01:30.333 TEST_HEADER include/spdk/log.h 00:01:30.333 CC examples/util/zipf/zipf.o 00:01:30.333 CC examples/nvme/hello_world/hello_world.o 00:01:30.333 TEST_HEADER include/spdk/lvol.h 00:01:30.333 CC test/event/event_perf/event_perf.o 00:01:30.333 TEST_HEADER include/spdk/memory.h 00:01:30.333 CC test/thread/poller_perf/poller_perf.o 00:01:30.333 TEST_HEADER include/spdk/mmio.h 00:01:30.333 CC examples/accel/perf/accel_perf.o 00:01:30.333 TEST_HEADER include/spdk/nbd.h 00:01:30.333 TEST_HEADER include/spdk/notify.h 00:01:30.333 TEST_HEADER include/spdk/nvme.h 00:01:30.333 TEST_HEADER include/spdk/nvme_intel.h 00:01:30.333 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:30.333 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:30.333 CC examples/blob/cli/blobcli.o 00:01:30.594 TEST_HEADER include/spdk/nvme_spec.h 00:01:30.594 TEST_HEADER 
include/spdk/nvme_zns.h 00:01:30.594 CC examples/blob/hello_world/hello_blob.o 00:01:30.594 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:30.594 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:30.594 CC test/blobfs/mkfs/mkfs.o 00:01:30.594 TEST_HEADER include/spdk/nvmf.h 00:01:30.594 CC test/bdev/bdevio/bdevio.o 00:01:30.594 CC test/accel/dif/dif.o 00:01:30.594 CC examples/thread/thread/thread_ex.o 00:01:30.594 TEST_HEADER include/spdk/nvmf_spec.h 00:01:30.594 CC test/dma/test_dma/test_dma.o 00:01:30.594 CC test/app/bdev_svc/bdev_svc.o 00:01:30.594 TEST_HEADER include/spdk/nvmf_transport.h 00:01:30.594 TEST_HEADER include/spdk/opal.h 00:01:30.594 TEST_HEADER include/spdk/opal_spec.h 00:01:30.594 CC examples/bdev/hello_world/hello_bdev.o 00:01:30.594 CC app/fio/bdev/fio_plugin.o 00:01:30.594 TEST_HEADER include/spdk/pci_ids.h 00:01:30.594 TEST_HEADER include/spdk/pipe.h 00:01:30.594 TEST_HEADER include/spdk/queue.h 00:01:30.594 CC examples/nvmf/nvmf/nvmf.o 00:01:30.594 TEST_HEADER include/spdk/reduce.h 00:01:30.594 TEST_HEADER include/spdk/rpc.h 00:01:30.594 TEST_HEADER include/spdk/scheduler.h 00:01:30.594 TEST_HEADER include/spdk/scsi.h 00:01:30.594 TEST_HEADER include/spdk/scsi_spec.h 00:01:30.594 TEST_HEADER include/spdk/sock.h 00:01:30.594 TEST_HEADER include/spdk/stdinc.h 00:01:30.594 TEST_HEADER include/spdk/string.h 00:01:30.594 TEST_HEADER include/spdk/thread.h 00:01:30.594 LINK spdk_lspci 00:01:30.594 TEST_HEADER include/spdk/trace.h 00:01:30.594 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:30.594 TEST_HEADER include/spdk/trace_parser.h 00:01:30.594 TEST_HEADER include/spdk/tree.h 00:01:30.594 TEST_HEADER include/spdk/ublk.h 00:01:30.594 CC test/env/mem_callbacks/mem_callbacks.o 00:01:30.594 TEST_HEADER include/spdk/util.h 00:01:30.594 TEST_HEADER include/spdk/uuid.h 00:01:30.594 TEST_HEADER include/spdk/version.h 00:01:30.594 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:30.594 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:30.594 TEST_HEADER include/spdk/vhost.h 00:01:30.594 TEST_HEADER include/spdk/vmd.h 00:01:30.594 CC test/lvol/esnap/esnap.o 00:01:30.594 TEST_HEADER include/spdk/xor.h 00:01:30.594 TEST_HEADER include/spdk/zipf.h 00:01:30.594 CXX test/cpp_headers/accel.o 00:01:30.594 LINK rpc_client_test 00:01:30.594 LINK spdk_nvme_discover 00:01:30.594 LINK interrupt_tgt 00:01:30.594 LINK lsvmd 00:01:30.594 LINK jsoncat 00:01:30.594 LINK nvmf_tgt 00:01:30.594 LINK histogram_perf 00:01:30.867 LINK vtophys 00:01:30.867 LINK event_perf 00:01:30.867 LINK env_dpdk_post_init 00:01:30.867 LINK poller_perf 00:01:30.867 LINK spdk_trace_record 00:01:30.867 LINK vhost 00:01:30.867 LINK stub 00:01:30.867 LINK zipf 00:01:30.867 LINK iscsi_tgt 00:01:30.867 LINK ioat_perf 00:01:30.867 LINK verify 00:01:30.867 LINK spdk_tgt 00:01:30.867 LINK hello_sock 00:01:30.867 LINK hello_world 00:01:30.867 LINK mkfs 00:01:30.867 LINK bdev_svc 00:01:30.867 LINK hello_blob 00:01:31.134 CXX test/cpp_headers/accel_module.o 00:01:31.134 LINK aer 00:01:31.134 LINK hello_bdev 00:01:31.134 LINK thread 00:01:31.134 LINK spdk_dd 00:01:31.134 LINK idxd_perf 00:01:31.134 CC examples/nvme/reconnect/reconnect.o 00:01:31.134 CXX test/cpp_headers/assert.o 00:01:31.134 CXX test/cpp_headers/barrier.o 00:01:31.134 LINK nvmf 00:01:31.134 CXX test/cpp_headers/base64.o 00:01:31.134 LINK pci_ut 00:01:31.134 LINK spdk_trace 00:01:31.134 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:31.134 CC test/nvme/reset/reset.o 00:01:31.134 CC test/event/reactor/reactor.o 00:01:31.134 CXX test/cpp_headers/bdev.o 00:01:31.134 CC 
examples/vmd/led/led.o 00:01:31.134 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:31.134 LINK dif 00:01:31.134 CC examples/bdev/bdevperf/bdevperf.o 00:01:31.134 CC test/nvme/sgl/sgl.o 00:01:31.134 CC examples/nvme/arbitration/arbitration.o 00:01:31.134 CC test/nvme/e2edp/nvme_dp.o 00:01:31.134 LINK bdevio 00:01:31.134 CC test/event/reactor_perf/reactor_perf.o 00:01:31.400 LINK test_dma 00:01:31.400 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:31.400 CC examples/nvme/hotplug/hotplug.o 00:01:31.400 CXX test/cpp_headers/bdev_module.o 00:01:31.400 CC test/nvme/overhead/overhead.o 00:01:31.400 LINK accel_perf 00:01:31.400 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:31.400 CC test/nvme/startup/startup.o 00:01:31.400 LINK nvme_fuzz 00:01:31.400 CC test/nvme/err_injection/err_injection.o 00:01:31.400 CXX test/cpp_headers/bdev_zone.o 00:01:31.400 CC test/nvme/reserve/reserve.o 00:01:31.401 LINK blobcli 00:01:31.401 LINK spdk_nvme 00:01:31.401 LINK reactor 00:01:31.401 LINK spdk_bdev 00:01:31.401 CC examples/nvme/abort/abort.o 00:01:31.401 CXX test/cpp_headers/bit_array.o 00:01:31.401 CC test/event/app_repeat/app_repeat.o 00:01:31.401 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:31.401 LINK led 00:01:31.663 CC test/nvme/simple_copy/simple_copy.o 00:01:31.663 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:31.663 LINK reactor_perf 00:01:31.663 CC test/nvme/connect_stress/connect_stress.o 00:01:31.663 CXX test/cpp_headers/bit_pool.o 00:01:31.663 CXX test/cpp_headers/blob_bdev.o 00:01:31.663 CC test/nvme/boot_partition/boot_partition.o 00:01:31.663 CC test/nvme/compliance/nvme_compliance.o 00:01:31.663 CXX test/cpp_headers/blobfs_bdev.o 00:01:31.663 CXX test/cpp_headers/blobfs.o 00:01:31.663 CXX test/cpp_headers/blob.o 00:01:31.663 CC test/nvme/fused_ordering/fused_ordering.o 00:01:31.663 CC test/event/scheduler/scheduler.o 00:01:31.663 CXX test/cpp_headers/conf.o 00:01:31.663 CXX test/cpp_headers/config.o 00:01:31.663 CXX test/cpp_headers/cpuset.o 00:01:31.663 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:31.663 LINK reset 00:01:31.663 CC test/nvme/fdp/fdp.o 00:01:31.663 LINK startup 00:01:31.663 CXX test/cpp_headers/crc16.o 00:01:31.663 LINK reconnect 00:01:31.663 LINK nvme_dp 00:01:31.663 LINK sgl 00:01:31.663 CXX test/cpp_headers/crc32.o 00:01:31.663 LINK err_injection 00:01:31.663 LINK hotplug 00:01:31.927 LINK spdk_nvme_perf 00:01:31.927 CXX test/cpp_headers/crc64.o 00:01:31.927 CC test/nvme/cuse/cuse.o 00:01:31.927 LINK mem_callbacks 00:01:31.927 LINK reserve 00:01:31.927 LINK spdk_nvme_identify 00:01:31.927 CXX test/cpp_headers/dif.o 00:01:31.927 LINK app_repeat 00:01:31.927 LINK arbitration 00:01:31.927 CXX test/cpp_headers/dma.o 00:01:31.927 CXX test/cpp_headers/endian.o 00:01:31.927 CXX test/cpp_headers/env_dpdk.o 00:01:31.927 CXX test/cpp_headers/env.o 00:01:31.927 LINK pmr_persistence 00:01:31.927 LINK overhead 00:01:31.927 LINK cmb_copy 00:01:31.927 LINK spdk_top 00:01:31.927 CXX test/cpp_headers/event.o 00:01:31.927 LINK connect_stress 00:01:31.927 LINK simple_copy 00:01:31.927 LINK boot_partition 00:01:32.191 CXX test/cpp_headers/fd_group.o 00:01:32.191 CXX test/cpp_headers/fd.o 00:01:32.191 CXX test/cpp_headers/file.o 00:01:32.191 CXX test/cpp_headers/ftl.o 00:01:32.191 CXX test/cpp_headers/hexlify.o 00:01:32.191 CXX test/cpp_headers/histogram_data.o 00:01:32.191 CXX test/cpp_headers/gpt_spec.o 00:01:32.191 CXX test/cpp_headers/idxd.o 00:01:32.191 CXX test/cpp_headers/idxd_spec.o 00:01:32.191 CXX test/cpp_headers/init.o 00:01:32.191 CXX test/cpp_headers/ioat.o 
00:01:32.191 CXX test/cpp_headers/ioat_spec.o 00:01:32.191 CXX test/cpp_headers/iscsi_spec.o 00:01:32.191 LINK fused_ordering 00:01:32.191 CXX test/cpp_headers/json.o 00:01:32.191 CXX test/cpp_headers/jsonrpc.o 00:01:32.191 LINK vhost_fuzz 00:01:32.191 LINK nvme_manage 00:01:32.191 CXX test/cpp_headers/keyring.o 00:01:32.191 LINK doorbell_aers 00:01:32.191 LINK memory_ut 00:01:32.191 CXX test/cpp_headers/keyring_module.o 00:01:32.191 CXX test/cpp_headers/likely.o 00:01:32.191 CXX test/cpp_headers/log.o 00:01:32.191 CXX test/cpp_headers/lvol.o 00:01:32.191 LINK scheduler 00:01:32.191 CXX test/cpp_headers/mmio.o 00:01:32.191 CXX test/cpp_headers/memory.o 00:01:32.191 CXX test/cpp_headers/nbd.o 00:01:32.191 CXX test/cpp_headers/notify.o 00:01:32.191 CXX test/cpp_headers/nvme.o 00:01:32.191 CXX test/cpp_headers/nvme_intel.o 00:01:32.191 CXX test/cpp_headers/nvme_ocssd.o 00:01:32.191 LINK abort 00:01:32.191 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:32.191 CXX test/cpp_headers/nvme_spec.o 00:01:32.191 CXX test/cpp_headers/nvme_zns.o 00:01:32.191 CXX test/cpp_headers/nvmf_cmd.o 00:01:32.191 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:32.191 LINK nvme_compliance 00:01:32.453 CXX test/cpp_headers/nvmf.o 00:01:32.453 CXX test/cpp_headers/nvmf_spec.o 00:01:32.453 LINK fdp 00:01:32.453 CXX test/cpp_headers/nvmf_transport.o 00:01:32.453 CXX test/cpp_headers/opal.o 00:01:32.453 CXX test/cpp_headers/opal_spec.o 00:01:32.453 CXX test/cpp_headers/pci_ids.o 00:01:32.453 CXX test/cpp_headers/pipe.o 00:01:32.453 CXX test/cpp_headers/queue.o 00:01:32.453 CXX test/cpp_headers/reduce.o 00:01:32.453 CXX test/cpp_headers/rpc.o 00:01:32.453 CXX test/cpp_headers/scheduler.o 00:01:32.453 CXX test/cpp_headers/scsi.o 00:01:32.453 CXX test/cpp_headers/scsi_spec.o 00:01:32.453 CXX test/cpp_headers/sock.o 00:01:32.453 CXX test/cpp_headers/stdinc.o 00:01:32.453 CXX test/cpp_headers/string.o 00:01:32.453 CXX test/cpp_headers/thread.o 00:01:32.453 CXX test/cpp_headers/trace.o 00:01:32.453 CXX test/cpp_headers/trace_parser.o 00:01:32.453 CXX test/cpp_headers/tree.o 00:01:32.453 CXX test/cpp_headers/ublk.o 00:01:32.453 CXX test/cpp_headers/util.o 00:01:32.713 CXX test/cpp_headers/uuid.o 00:01:32.713 CXX test/cpp_headers/version.o 00:01:32.713 CXX test/cpp_headers/vfio_user_spec.o 00:01:32.713 CXX test/cpp_headers/vfio_user_pci.o 00:01:32.713 CXX test/cpp_headers/vhost.o 00:01:32.713 CXX test/cpp_headers/vmd.o 00:01:32.713 CXX test/cpp_headers/xor.o 00:01:32.713 CXX test/cpp_headers/zipf.o 00:01:32.713 LINK bdevperf 00:01:33.279 LINK cuse 00:01:33.538 LINK iscsi_fuzz 00:01:36.071 LINK esnap 00:01:36.329 00:01:36.329 real 0m47.639s 00:01:36.329 user 10m0.895s 00:01:36.329 sys 2m27.042s 00:01:36.329 03:15:13 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:36.329 03:15:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:36.329 ************************************ 00:01:36.329 END TEST make 00:01:36.329 ************************************ 00:01:36.329 03:15:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:36.329 03:15:13 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:36.329 03:15:13 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:36.329 03:15:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.329 03:15:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:36.329 03:15:13 -- pm/common@45 -- $ pid=44674 00:01:36.329 03:15:13 -- pm/common@52 -- $ sudo kill -TERM 44674 00:01:36.329 03:15:13 -- 
pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.329 03:15:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:36.329 03:15:13 -- pm/common@45 -- $ pid=44673 00:01:36.329 03:15:13 -- pm/common@52 -- $ sudo kill -TERM 44673 00:01:36.329 03:15:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.329 03:15:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:36.329 03:15:13 -- pm/common@45 -- $ pid=44672 00:01:36.329 03:15:13 -- pm/common@52 -- $ sudo kill -TERM 44672 00:01:36.329 03:15:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.329 03:15:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:36.329 03:15:13 -- pm/common@45 -- $ pid=44679 00:01:36.329 03:15:13 -- pm/common@52 -- $ sudo kill -TERM 44679 00:01:36.588 03:15:13 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:36.588 03:15:13 -- nvmf/common.sh@7 -- # uname -s 00:01:36.588 03:15:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:36.588 03:15:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:36.588 03:15:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:36.588 03:15:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:36.588 03:15:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:36.588 03:15:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:36.588 03:15:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:36.588 03:15:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:36.588 03:15:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:36.588 03:15:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:36.588 03:15:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:36.588 03:15:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:36.588 03:15:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:36.588 03:15:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:36.588 03:15:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:36.588 03:15:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:36.588 03:15:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:36.588 03:15:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:36.588 03:15:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:36.588 03:15:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:36.588 03:15:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:36.588 03:15:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:36.588 03:15:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:36.588 03:15:13 -- paths/export.sh@5 -- # export PATH 00:01:36.588 03:15:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:36.588 03:15:13 -- nvmf/common.sh@47 -- # : 0 00:01:36.588 03:15:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:36.588 03:15:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:36.588 03:15:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:36.588 03:15:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:36.588 03:15:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:36.588 03:15:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:36.588 03:15:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:36.588 03:15:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:36.588 03:15:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:36.588 03:15:13 -- spdk/autotest.sh@32 -- # uname -s 00:01:36.588 03:15:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:36.588 03:15:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:36.588 03:15:13 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:36.588 03:15:13 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:36.588 03:15:13 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:36.588 03:15:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:36.588 03:15:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:36.588 03:15:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:36.588 03:15:13 -- spdk/autotest.sh@48 -- # udevadm_pid=100024 00:01:36.588 03:15:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:36.588 03:15:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:36.588 03:15:13 -- pm/common@17 -- # local monitor 00:01:36.588 03:15:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.588 03:15:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=100025 00:01:36.588 03:15:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.588 03:15:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=100027 00:01:36.588 03:15:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.588 03:15:13 -- pm/common@21 -- # date +%s 00:01:36.588 03:15:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=100031 00:01:36.588 03:15:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.588 03:15:13 -- pm/common@21 -- # date +%s 00:01:36.588 03:15:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=100034 00:01:36.588 03:15:13 -- pm/common@26 -- # sleep 1 00:01:36.588 03:15:13 -- pm/common@21 -- # date +%s 00:01:36.588 03:15:13 -- pm/common@21 -- # date +%s 00:01:36.588 03:15:13 -- pm/common@21 -- # sudo -E 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713489313 00:01:36.588 03:15:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713489313 00:01:36.588 03:15:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713489313 00:01:36.588 03:15:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713489313 00:01:36.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713489313_collect-bmc-pm.bmc.pm.log 00:01:36.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713489313_collect-vmstat.pm.log 00:01:36.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713489313_collect-cpu-load.pm.log 00:01:36.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713489313_collect-cpu-temp.pm.log 00:01:37.522 03:15:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:37.522 03:15:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:37.522 03:15:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:37.522 03:15:14 -- common/autotest_common.sh@10 -- # set +x 00:01:37.522 03:15:14 -- spdk/autotest.sh@59 -- # create_test_list 00:01:37.522 03:15:14 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:37.522 03:15:14 -- common/autotest_common.sh@10 -- # set +x 00:01:37.522 03:15:14 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:37.522 03:15:14 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:37.522 03:15:14 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:37.522 03:15:14 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:37.522 03:15:14 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:37.522 03:15:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:37.522 03:15:14 -- common/autotest_common.sh@1441 -- # uname 00:01:37.522 03:15:14 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:37.522 03:15:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:37.522 03:15:14 -- common/autotest_common.sh@1461 -- # uname 00:01:37.522 03:15:14 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:37.522 03:15:14 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:37.522 03:15:14 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:37.522 03:15:14 -- spdk/autotest.sh@72 -- # hash lcov 00:01:37.522 03:15:14 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:37.522 03:15:14 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:37.522 --rc lcov_branch_coverage=1 00:01:37.522 --rc lcov_function_coverage=1 00:01:37.522 --rc genhtml_branch_coverage=1 00:01:37.522 --rc 
genhtml_function_coverage=1 00:01:37.522 --rc genhtml_legend=1 00:01:37.523 --rc geninfo_all_blocks=1 00:01:37.523 ' 00:01:37.523 03:15:14 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:37.523 --rc lcov_branch_coverage=1 00:01:37.523 --rc lcov_function_coverage=1 00:01:37.523 --rc genhtml_branch_coverage=1 00:01:37.523 --rc genhtml_function_coverage=1 00:01:37.523 --rc genhtml_legend=1 00:01:37.523 --rc geninfo_all_blocks=1 00:01:37.523 ' 00:01:37.523 03:15:14 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:37.523 --rc lcov_branch_coverage=1 00:01:37.523 --rc lcov_function_coverage=1 00:01:37.523 --rc genhtml_branch_coverage=1 00:01:37.523 --rc genhtml_function_coverage=1 00:01:37.523 --rc genhtml_legend=1 00:01:37.523 --rc geninfo_all_blocks=1 00:01:37.523 --no-external' 00:01:37.523 03:15:14 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:37.523 --rc lcov_branch_coverage=1 00:01:37.523 --rc lcov_function_coverage=1 00:01:37.523 --rc genhtml_branch_coverage=1 00:01:37.523 --rc genhtml_function_coverage=1 00:01:37.523 --rc genhtml_legend=1 00:01:37.523 --rc geninfo_all_blocks=1 00:01:37.523 --no-external' 00:01:37.523 03:15:14 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:37.523 lcov: LCOV version 1.14 00:01:37.523 03:15:15 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:49.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:01:49.762 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:01:51.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:01:51.136 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:01:51.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:01:51.136 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:01:51.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:01:51.136 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:09.215 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:09.215 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:09.215 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:09.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:09.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:09.216 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no 
functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:09.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:09.475 03:15:46 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:09.475 03:15:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:09.475 03:15:46 -- common/autotest_common.sh@10 -- # set +x 00:02:09.475 03:15:46 -- spdk/autotest.sh@91 -- # rm -f 00:02:09.475 03:15:46 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:10.851 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:10.851 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:10.851 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:10.851 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:10.851 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:10.851 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:10.851 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:10.851 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:10.851 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:10.851 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:10.851 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:10.851 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:10.851 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:10.851 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:10.851 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:10.851 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:10.851 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:10.851 03:15:48 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:10.851 03:15:48 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:10.851 03:15:48 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:10.851 03:15:48 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:10.851 03:15:48 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:10.851 03:15:48 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:10.851 03:15:48 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:10.851 03:15:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:10.851 03:15:48 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:10.851 03:15:48 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:10.851 03:15:48 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:10.851 03:15:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:10.851 03:15:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:10.851 03:15:48 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:10.851 03:15:48 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:10.851 No valid GPT data, bailing 
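
The get_zoned_devs pass above skips zoned namespaces before any destructive step. A minimal sketch of that scan, reconstructed from the xtrace rather than the autotest source, assuming only that sysfs exposes queue/zoned, which reads "none" for a conventional drive:

    get_zoned_devs() {
        local -gA zoned_devs=()
        local nvme
        for nvme in /sys/block/nvme*; do
            [[ -e $nvme/queue/zoned ]] || continue
            # queue/zoned reads "none" for a conventional namespace; skip those
            if [[ $(<"$nvme/queue/zoned") != none ]]; then
                zoned_devs["${nvme##*/}"]=1
            fi
        done
    }

In this run the check "[[ none != none ]]" is false for nvme0n1, so the device is treated as non-zoned and eligible for the wipe that follows.
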
00:02:10.851 03:15:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:10.851 03:15:48 -- scripts/common.sh@391 -- # pt= 00:02:10.851 03:15:48 -- scripts/common.sh@392 -- # return 1 00:02:10.851 03:15:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:10.851 1+0 records in 00:02:10.851 1+0 records out 00:02:10.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00253027 s, 414 MB/s 00:02:10.851 03:15:48 -- spdk/autotest.sh@118 -- # sync 00:02:10.851 03:15:48 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:10.851 03:15:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:10.851 03:15:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:12.754 03:15:50 -- spdk/autotest.sh@124 -- # uname -s 00:02:12.754 03:15:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:12.754 03:15:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:12.754 03:15:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:12.754 03:15:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:12.754 03:15:50 -- common/autotest_common.sh@10 -- # set +x 00:02:12.754 ************************************ 00:02:12.754 START TEST setup.sh 00:02:12.754 ************************************ 00:02:12.754 03:15:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:12.754 * Looking for test storage... 00:02:12.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:12.754 03:15:50 -- setup/test-setup.sh@10 -- # uname -s 00:02:12.754 03:15:50 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:12.754 03:15:50 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:12.754 03:15:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:12.754 03:15:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:12.754 03:15:50 -- common/autotest_common.sh@10 -- # set +x 00:02:13.013 ************************************ 00:02:13.013 START TEST acl 00:02:13.013 ************************************ 00:02:13.013 03:15:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:13.013 * Looking for test storage... 
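
Before zeroing the disk, scripts/common.sh checks for a recognizable partition table: the "return 1" above is blkid reporting no PTTYPE, which clears the device for the 1 MiB wipe. A hedged sketch of that safety check, with the device name taken from this run:

    dev=/dev/nvme0n1                       # device name from this run
    pt=$(blkid -s PTTYPE -o value "$dev")  # empty when no GPT/MBR is found
    if [[ -z $pt ]]; then
        # no partition table detected, so the disk is treated as free to reuse
        dd if=/dev/zero of="$dev" bs=1M count=1   # needs root
        sync
    fi
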
00:02:13.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:13.013 03:15:50 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:13.013 03:15:50 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:13.013 03:15:50 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:13.013 03:15:50 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:13.013 03:15:50 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:13.013 03:15:50 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:13.013 03:15:50 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:13.013 03:15:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:13.013 03:15:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:13.013 03:15:50 -- setup/acl.sh@12 -- # devs=() 00:02:13.013 03:15:50 -- setup/acl.sh@12 -- # declare -a devs 00:02:13.013 03:15:50 -- setup/acl.sh@13 -- # drivers=() 00:02:13.013 03:15:50 -- setup/acl.sh@13 -- # declare -A drivers 00:02:13.013 03:15:50 -- setup/acl.sh@51 -- # setup reset 00:02:13.013 03:15:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:13.013 03:15:50 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:14.388 03:15:51 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:14.388 03:15:51 -- setup/acl.sh@16 -- # local dev driver 00:02:14.388 03:15:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:14.388 03:15:51 -- setup/acl.sh@15 -- # setup output status 00:02:14.388 03:15:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:14.388 03:15:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:15.764 Hugepages 00:02:15.764 node hugesize free / total 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 00:02:15.764 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
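
The collect_setup_devs loop running here walks "setup.sh status" output and keeps only NVMe-bound controllers, as the repeated "read -r _ dev _ _ _ driver _" records show. A rough sketch of that classification, with the field layout inferred from the xtrace and the script path assumed for illustration:

    declare -a devs=()
    declare -A drivers=()
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue   # keep only BDF-shaped rows
        if [[ $driver == nvme ]]; then      # ioatdma rows fall through
            # the real test also honors PCI_BLOCKED before keeping a device
            devs+=("$dev")
            drivers["$dev"]=$driver
        fi
    done < <(./setup.sh status)             # path assumed

The ioatdma records that follow are exactly this loop discarding the I/OAT DMA channels one by one until it reaches the lone NVMe controller at 0000:88:00.0.
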
00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # continue 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:15.764 03:15:53 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:15.764 03:15:53 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:15.764 03:15:53 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:15.764 03:15:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.764 03:15:53 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:15.764 03:15:53 -- setup/acl.sh@54 -- # run_test denied denied 00:02:15.764 03:15:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:15.764 03:15:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:15.764 03:15:53 -- common/autotest_common.sh@10 -- # set +x 00:02:15.764 ************************************ 00:02:15.764 START TEST denied 00:02:15.764 ************************************ 00:02:15.764 03:15:53 -- common/autotest_common.sh@1111 -- # denied 00:02:15.764 03:15:53 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:15.764 03:15:53 -- setup/acl.sh@38 -- # setup output config 00:02:15.764 03:15:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:15.764 03:15:53 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:15.764 03:15:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:17.142 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:17.142 03:15:54 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:17.142 03:15:54 -- setup/acl.sh@28 -- # local dev driver 00:02:17.142 03:15:54 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:17.142 03:15:54 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:17.142 03:15:54 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:17.142 03:15:54 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:17.142 03:15:54 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:17.142 03:15:54 -- setup/acl.sh@41 -- # setup reset 00:02:17.142 03:15:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:17.142 03:15:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:19.673 00:02:19.673 real 0m3.563s 00:02:19.673 user 0m1.026s 00:02:19.673 sys 0m1.710s 00:02:19.673 03:15:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:19.673 03:15:56 -- common/autotest_common.sh@10 -- # set +x 00:02:19.673 ************************************ 00:02:19.673 END TEST denied 00:02:19.673 ************************************ 00:02:19.673 03:15:56 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:19.673 03:15:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:19.673 03:15:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:19.673 03:15:56 -- common/autotest_common.sh@10 -- # set +x 00:02:19.673 ************************************ 00:02:19.673 START TEST allowed 00:02:19.673 ************************************ 00:02:19.673 03:15:56 -- common/autotest_common.sh@1111 -- # allowed 00:02:19.673 03:15:56 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:19.673 03:15:56 -- setup/acl.sh@45 -- # setup output config 00:02:19.673 03:15:56 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:19.673 03:15:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:19.673 03:15:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:22.210 
0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:22.210 03:15:59 -- setup/acl.sh@47 -- # verify 00:02:22.210 03:15:59 -- setup/acl.sh@28 -- # local dev driver 00:02:22.210 03:15:59 -- setup/acl.sh@48 -- # setup reset 00:02:22.210 03:15:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:22.210 03:15:59 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:23.149 00:02:23.149 real 0m3.613s 00:02:23.149 user 0m0.975s 00:02:23.149 sys 0m1.562s 00:02:23.149 03:16:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:23.149 03:16:00 -- common/autotest_common.sh@10 -- # set +x 00:02:23.149 ************************************ 00:02:23.149 END TEST allowed 00:02:23.149 ************************************ 00:02:23.149 00:02:23.149 real 0m10.182s 00:02:23.149 user 0m3.176s 00:02:23.149 sys 0m5.167s 00:02:23.149 03:16:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:23.149 03:16:00 -- common/autotest_common.sh@10 -- # set +x 00:02:23.149 ************************************ 00:02:23.149 END TEST acl 00:02:23.149 ************************************ 00:02:23.149 03:16:00 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:23.149 03:16:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:23.149 03:16:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:23.149 03:16:00 -- common/autotest_common.sh@10 -- # set +x 00:02:23.149 ************************************ 00:02:23.149 START TEST hugepages 00:02:23.149 ************************************ 00:02:23.149 03:16:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:23.444 * Looking for test storage... 
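
The denied/allowed verification above resolves the driver a controller is bound to straight from sysfs. A minimal sketch of that check, assuming only that the kernel exposes the bound driver as a "driver" symlink under the device node, with the BDF taken from this run:

    bdf=0000:88:00.0                        # controller exercised in this run
    driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
    [[ $driver == vfio-pci ]] && echo "$bdf bound to $driver"

For the denied test the expected driver is nvme (the controller is blocked from rebinding); for the allowed test setup.sh config moves it to vfio-pci, which is what the "nvme -> vfio-pci" line above records.
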
00:02:23.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:23.444 03:16:00 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:23.444 03:16:00 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:23.444 03:16:00 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:23.444 03:16:00 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:23.444 03:16:00 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:23.444 03:16:00 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:23.444 03:16:00 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:23.444 03:16:00 -- setup/common.sh@18 -- # local node= 00:02:23.444 03:16:00 -- setup/common.sh@19 -- # local var val 00:02:23.444 03:16:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:23.444 03:16:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:23.444 03:16:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:23.444 03:16:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:23.444 03:16:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:23.444 03:16:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:23.444 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.444 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 36226208 kB' 'MemAvailable: 39919996 kB' 'Buffers: 2696 kB' 'Cached: 17767088 kB' 'SwapCached: 0 kB' 'Active: 14707248 kB' 'Inactive: 3498348 kB' 'Active(anon): 14120076 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 438932 kB' 'Mapped: 210380 kB' 'Shmem: 13684264 kB' 'KReclaimable: 204212 kB' 'Slab: 585232 kB' 'SReclaimable: 204212 kB' 'SUnreclaim: 381020 kB' 'KernelStack: 12800 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 15278168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196536 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB' 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.445 03:16:00 -- setup/common.sh@32 -- # continue 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.445 03:16:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.445 03:16:00 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]]
[... identical setup/common.sh@31/@32 iterations elided: the IFS=': ' read loop skips Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd and HugePages_Surp, each hitting "continue" because it does not match Hugepagesize ...]
00:02:23.446 03:16:00 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:23.446 03:16:00 -- setup/common.sh@33 -- # echo 2048
00:02:23.446 03:16:00 -- setup/common.sh@33 -- # return 0
00:02:23.446 03:16:00 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:23.446 03:16:00 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:23.446 03:16:00 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:02:23.446 03:16:00 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:02:23.446 03:16:00 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
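[Editor's note: what the trace above is doing, in brief. setup/common.sh walks /proc/meminfo with IFS=': ' and read -r var val _, skipping every key until Hugepagesize, then echoes the numeric field (2048) and returns. A minimal sketch of that scan pattern, assuming the same field splitting; the function name is illustrative, not the verbatim SPDK source:

    # Illustrative re-creation of the scan traced above: split each
    # /proc/meminfo line on ': ' and print the value of the first
    # matching key. Not the exact setup/common.sh implementation.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every other key hits "continue"
            echo "$val"                        # e.g. 2048 for Hugepagesize
            return 0
        done < /proc/meminfo
        return 1
    }

Run as get_meminfo_sketch Hugepagesize it would print 2048 on this machine, matching the echo 2048 / return 0 pair in the trace.]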
00:02:23.446 03:16:00 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:02:23.446 03:16:00 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:02:23.446 03:16:00 -- setup/hugepages.sh@207 -- # get_nodes
00:02:23.446 03:16:00 -- setup/hugepages.sh@27 -- # local node
00:02:23.446 03:16:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:23.446 03:16:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:02:23.446 03:16:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:23.446 03:16:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:23.446 03:16:00 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:23.446 03:16:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:23.446 03:16:00 -- setup/hugepages.sh@208 -- # clear_hp
00:02:23.446 03:16:00 -- setup/hugepages.sh@37 -- # local node hp
[... setup/hugepages.sh@39-@41 iterations elided: for each of the two nodes, "echo 0" is written to every /sys/devices/system/node/node$node/hugepages/hugepages-* counter ...]
00:02:23.446 03:16:00 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:23.446 03:16:00 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:23.446 03:16:00 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:23.446 03:16:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:23.446 03:16:00 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:23.446 03:16:00 -- common/autotest_common.sh@10 -- # set +x
00:02:23.446 ************************************
00:02:23.446 START TEST default_setup
00:02:23.446 ************************************
00:02:23.446 03:16:00 -- common/autotest_common.sh@1111 -- # default_setup
00:02:23.446 03:16:00 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:23.446 03:16:00 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:23.446 03:16:00 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:23.446 03:16:00 -- setup/hugepages.sh@51 -- # shift
00:02:23.446 03:16:00 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:23.446 03:16:00 -- setup/hugepages.sh@52 -- # local node_ids
00:02:23.446 03:16:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:23.446 03:16:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:23.446 03:16:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:23.446 03:16:00 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:23.446 03:16:00 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:23.446 03:16:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:23.446 03:16:00 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:23.446 03:16:00 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:23.446 03:16:00 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:23.446 03:16:00 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
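[Editor's note: two things just happened in hugepages.sh. clear_hp zeroed every per-node hugepage counter under sysfs, and get_test_nr_hugepages converted the requested 2097152 kB into pages of the detected 2048 kB default size: 2097152 / 2048 = 1024 pages, all assigned to node 0. A rough sketch of that flow, assuming the standard sysfs layout (the writes need root; this is illustrative, not the exact setup/hugepages.sh source):

    # Clear every per-node hugepage counter, then size the test pool.
    # All sizes are in kB, matching /proc/meminfo units.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"        # the repeated "echo 0" in the trace
        done
    done
    size=2097152 default_hugepages=2048
    nr_hugepages=$((size / default_hugepages)) # 2097152 kB / 2048 kB = 1024 pages
    echo "$nr_hugepages"                       # -> 1024, pinned to node 0 in this run
]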
00:02:23.446 03:16:00 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:23.446 03:16:00 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:23.446 03:16:00 -- setup/hugepages.sh@73 -- # return 0
00:02:23.446 03:16:00 -- setup/hugepages.sh@137 -- # setup output
00:02:23.446 03:16:00 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:23.446 03:16:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:24.821 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:24.821 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:24.821 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:24.821 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:24.821 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:24.821 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:24.821 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:24.821 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:24.821 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:24.821 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:24.821 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:24.821 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:24.821 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:24.821 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:24.821 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:24.821 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:25.765 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:02:25.765 03:16:03 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:02:25.765 03:16:03 -- setup/hugepages.sh@89 -- # local node
00:02:25.765 03:16:03 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:25.765 03:16:03 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:25.765 03:16:03 -- setup/hugepages.sh@92 -- # local surp
00:02:25.765 03:16:03 -- setup/hugepages.sh@93 -- # local resv
00:02:25.765 03:16:03 -- setup/hugepages.sh@94 -- # local anon
00:02:25.765 03:16:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:25.765 03:16:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:25.765 03:16:03 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:25.765 03:16:03 -- setup/common.sh@18 -- # local node=
00:02:25.765 03:16:03 -- setup/common.sh@19 -- # local var val
00:02:25.765 03:16:03 -- setup/common.sh@20 -- # local mem_f mem
00:02:25.765 03:16:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:25.765 03:16:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:25.765 03:16:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:25.765 03:16:03 -- setup/common.sh@28 -- # mapfile -t mem
00:02:25.765 03:16:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:25.765 03:16:03 -- setup/common.sh@31 -- # IFS=': '
00:02:25.765 03:16:03 -- setup/common.sh@31 -- # read -r var val _
00:02:25.765 03:16:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38335476 kB' 'MemAvailable: 42029264 kB' 'Buffers: 2696 kB' 'Cached: 17767184 kB' 'SwapCached: 0 kB' 'Active: 14729444 kB' 'Inactive: 3498348 kB' 'Active(anon): 14142272 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461100 kB' 'Mapped: 210868 kB' 'Shmem: 13684360 kB' 'KReclaimable: 204212 kB' 'Slab: 584504 kB' 'SReclaimable: 204212 kB' 'SUnreclaim: 380292 kB' 'KernelStack: 12880 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15307824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
[... setup/common.sh@31/@32 scan elided: every key from MemTotal through HardwareCorrupted fails the AnonHugePages match and hits "continue" ...]
00:02:25.766 03:16:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:25.766 03:16:03 -- setup/common.sh@33 -- # echo 0
00:02:25.766 03:16:03 -- setup/common.sh@33 -- # return 0
00:02:25.766 03:16:03 -- setup/hugepages.sh@97 -- # anon=0
00:02:25.766 03:16:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... setup/common.sh@17-@31 preamble repeats, this time with get=HugePages_Surp: node empty, mem_f=/proc/meminfo, mapfile -t mem, per-node prefix strip, IFS=': ' read loop ...]
00:02:25.766 03:16:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38332516 kB' 'MemAvailable: 42026304 kB' 'Buffers: 2696 kB' 'Cached: 17767184 kB' 'SwapCached: 0 kB' 'Active: 14731400 kB' 'Inactive: 3498348 kB' 'Active(anon): 14144228 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463192 kB' 'Mapped: 211416 kB' 'Shmem: 13684360 kB' 'KReclaimable: 204212 kB' 'Slab: 584548 kB' 'SReclaimable: 204212 kB' 'SUnreclaim: 380336 kB' 'KernelStack: 12816 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15309168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196588 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
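[Editor's note on the node handling visible in the get_meminfo preamble above: with an empty node argument the [[ -e /sys/devices/system/node/node/meminfo ]] test fails and the reader falls back to /proc/meminfo; given a node id, the per-node file would be used instead, and the mem=("${mem[@]#Node +([0-9]) }") expansion strips the leading "Node N " prefix that per-node meminfo lines carry. A hedged sketch of that selection (extglob is needed for the +([0-9]) pattern; the helper name is made up for illustration):

    shopt -s extglob
    meminfo_file_sketch() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }
    mapfile -t mem < "$(meminfo_file_sketch "")"   # "" -> /proc/meminfo, as in this trace
    mem=("${mem[@]#Node +([0-9]) }")               # no-op for /proc/meminfo lines
]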
[... setup/common.sh@31/@32 scan elided: every key from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and hits "continue" ...]
00:02:25.767 03:16:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:25.767 03:16:03 -- setup/common.sh@33 -- # echo 0
00:02:25.767 03:16:03 -- setup/common.sh@33 -- # return 0
00:02:25.767 03:16:03 -- setup/hugepages.sh@99 -- # surp=0
00:02:25.767 03:16:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... setup/common.sh@17-@31 preamble repeats with get=HugePages_Rsvd ...]
00:02:25.767 03:16:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38332232 kB' 'MemAvailable: 42026020 kB' 'Buffers: 2696 kB' 'Cached: 17767200 kB' 'SwapCached: 0 kB' 'Active: 14725784 kB' 'Inactive: 3498348 kB' 'Active(anon): 14138612 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457548 kB' 'Mapped: 211340 kB' 'Shmem: 13684376 kB' 'KReclaimable: 204212 kB' 'Slab: 584500 kB' 'SReclaimable: 204212 kB' 'SUnreclaim: 380288 kB' 'KernelStack: 12800 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15304684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196568 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
[... setup/common.sh@31/@32 scan elided: every key from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits "continue" ...]
00:02:25.768 03:16:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:25.768 03:16:03 -- setup/common.sh@33 -- # echo 0
00:02:25.768 03:16:03 -- setup/common.sh@33 -- # return 0
00:02:25.768 03:16:03 -- setup/hugepages.sh@100 -- # resv=0
00:02:25.768 03:16:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:25.768 nr_hugepages=1024
00:02:25.768 03:16:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:25.768 resv_hugepages=0
00:02:25.768 03:16:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:25.768 surplus_hugepages=0
00:02:25.768 03:16:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:25.768 anon_hugepages=0
00:02:25.768 03:16:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:25.768 03:16:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:25.768 03:16:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
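[Editor's note: the arithmetic verify_nr_hugepages just performed. With anon_hugepages=0, surplus_hugepages=0 and resv_hugepages=0, the pool must satisfy 1024 == nr_hugepages + surp + resv, and the kernel's HugePages_Total is then fetched to confirm the same 1024. A compact sketch of that check, reusing the get_meminfo_sketch helper from the earlier note (illustrative, not the exact setup/hugepages.sh logic):

    verify_nr_hugepages_sketch() {
        local want=$1 surp resv total
        surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
        resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
        total=$(get_meminfo_sketch HugePages_Total)  # 1024 in this run
        # mirrors the traced checks: 1024 == nr + surp + resv, then 1024 == nr
        (( want == total + surp + resv )) && (( want == total ))
    }
    verify_nr_hugepages_sketch 1024 && echo "hugepage pool verified"
]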
00:02:25.768 03:16:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:25.768 03:16:03 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:25.768 03:16:03 -- setup/common.sh@18 -- # local node=
00:02:25.768 03:16:03 -- setup/common.sh@19 -- # local var val
00:02:25.768 03:16:03 -- setup/common.sh@20 -- # local mem_f mem
00:02:25.768 03:16:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:25.768 03:16:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:25.768 03:16:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:25.768 03:16:03 -- setup/common.sh@28 -- # mapfile -t mem
00:02:25.768 03:16:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:25.768 03:16:03 -- setup/common.sh@31 -- # IFS=': '
00:02:25.768 03:16:03 -- setup/common.sh@31 -- # read -r var val _
00:02:25.768 03:16:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38331864 kB' 'MemAvailable: 42025652 kB' 'Buffers: 2696 kB' 'Cached: 17767212 kB' 'SwapCached: 0 kB' 'Active: 14728976 kB' 'Inactive: 3498348 kB' 'Active(anon): 14141804 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461128 kB' 'Mapped: 210856 kB' 'Shmem: 13684388 kB' 'KReclaimable: 204212 kB' 'Slab: 584500 kB' 'SReclaimable: 204212 kB' 'SUnreclaim: 380288 kB' 'KernelStack: 12848 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15307996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196568 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
[xtrace elided: setup/common.sh@31-32 compares each /proc/meminfo field from MemTotal onward against HugePages_Total and issues "continue" for each until the matching line is read]
00:02:26.030 03:16:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:26.030 03:16:03 -- setup/common.sh@33 -- # echo 1024
00:02:26.030 03:16:03 -- setup/common.sh@33 -- # return 0
00:02:26.030 03:16:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:26.030 03:16:03 -- setup/hugepages.sh@112 -- # get_nodes
00:02:26.030 03:16:03 -- setup/hugepages.sh@27 -- # local node
00:02:26.030 03:16:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:26.030 03:16:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:26.030 03:16:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:26.030 03:16:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:26.030 03:16:03 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:26.030 03:16:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
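The @107/@110 arithmetic above is the core consistency check of verify_nr_hugepages: the kernel-reported total must equal the requested page count plus surplus and reserved pages. Spelled out with this run's numbers (variable names are illustrative):

    # The identity behind hugepages.sh@107/@110, using this run's values.
    nr_hugepages=1024   # pages the test requested via vm.nr_hugepages
    resv=0              # HugePages_Rsvd, read back above
    surp=0              # HugePages_Surp, read per node below
    total=1024          # HugePages_Total from /proc/meminfo

    (( total == nr_hugepages + surp + resv )) || {
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
        exit 1
    }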
00:02:26.030 03:16:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:26.030 03:16:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:26.030 03:16:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:26.030 03:16:03 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:26.030 03:16:03 -- setup/common.sh@18 -- # local node=0
00:02:26.030 03:16:03 -- setup/common.sh@19 -- # local var val
00:02:26.030 03:16:03 -- setup/common.sh@20 -- # local mem_f mem
00:02:26.030 03:16:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:26.030 03:16:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:26.030 03:16:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:26.030 03:16:03 -- setup/common.sh@28 -- # mapfile -t mem
00:02:26.030 03:16:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:26.030 03:16:03 -- setup/common.sh@31 -- # IFS=': '
00:02:26.030 03:16:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 18823072 kB' 'MemUsed: 14053868 kB' 'SwapCached: 0 kB' 'Active: 7601152 kB' 'Inactive: 3329304 kB' 'Active(anon): 7368548 kB' 'Inactive(anon): 0 kB' 'Active(file): 232604 kB' 'Inactive(file): 3329304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10762928 kB' 'Mapped: 105336 kB' 'AnonPages: 170744 kB' 'Shmem: 7201020 kB' 'KernelStack: 6040 kB' 'PageTables: 5180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105896 kB' 'Slab: 327856 kB' 'SReclaimable: 105896 kB' 'SUnreclaim: 221960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 compares each node0 meminfo field from MemTotal onward against HugePages_Surp and issues "continue" for each until the matching line is read]
00:02:26.031 03:16:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:26.031 03:16:03 -- setup/common.sh@33 -- # echo 0
00:02:26.031 03:16:03 -- setup/common.sh@33 -- # return 0
00:02:26.031 03:16:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:26.031 03:16:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:26.031 03:16:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:26.031 03:16:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:26.031 03:16:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:26.031 node0=1024 expecting 1024
00:02:26.031 03:16:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:26.031
00:02:26.031 real    0m2.462s
00:02:26.031 user    0m0.615s
00:02:26.031 sys     0m0.808s
00:02:26.031 03:16:03 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:26.031 03:16:03 -- common/autotest_common.sh@10 -- # set +x
00:02:26.031 ************************************
00:02:26.031 END TEST default_setup
00:02:26.031 ************************************
00:02:26.031 03:16:03 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:26.031 03:16:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:26.031 03:16:03 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:26.031 03:16:03 -- common/autotest_common.sh@10 -- # set +x
00:02:26.031 ************************************
00:02:26.031 START TEST per_node_1G_alloc
00:02:26.031 ************************************
00:02:26.031 03:16:03 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:02:26.031 03:16:03 -- setup/hugepages.sh@143 -- # local IFS=,
00:02:26.031 03:16:03 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:26.031 03:16:03 -- setup/hugepages.sh@49 -- # local size=1048576
00:02:26.031 03:16:03 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:26.031 03:16:03 -- setup/hugepages.sh@51 -- # shift
00:02:26.031 03:16:03 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:26.031 03:16:03 -- setup/hugepages.sh@52 -- # local node_ids
00:02:26.031 03:16:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:26.031 03:16:03 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:26.031 03:16:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:26.031 03:16:03 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:26.031 03:16:03 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:26.031 03:16:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:26.031 03:16:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:26.031 03:16:03 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:26.031 03:16:03 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:26.031 03:16:03 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:26.031 03:16:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:26.031 03:16:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:26.031 03:16:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:26.031 03:16:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:26.031 03:16:03 -- setup/hugepages.sh@73 -- # return 0
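get_test_nr_hugepages, traced above, converts the requested allocation size into a page count and fans it out over the explicit node list. A worked sketch with the traced values (variable names are illustrative):

    # Sizing logic behind "get_test_nr_hugepages 1048576 0 1".
    size_kb=1048576          # requested size: 1 GiB expressed in kB
    hugepagesize_kb=2048     # default page size, from "Hugepagesize: 2048 kB"
    node_ids=(0 1)           # explicit NUMA targets

    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1048576 / 2048 = 512 pages
    nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages              # 512 pages requested per node
    done
    printf 'NRHUGE=%d HUGENODE=%s\n' "$nr_hugepages" "$(IFS=,; echo "${node_ids[*]}")"

1048576 kB of 2048 kB pages is exactly the nr_hugepages=512 seen at hugepages.sh@57, requested once per node; the printf reproduces the NRHUGE=512 HUGENODE=0,1 values exported just below.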
00:02:26.031 03:16:03 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:26.031 03:16:03 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:26.031 03:16:03 -- setup/hugepages.sh@146 -- # setup output
00:02:26.031 03:16:03 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:26.031 03:16:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:27.412 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:27.412 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:27.412 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:27.412 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:27.412 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:27.412 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:27.412 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:27.412 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:27.412 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:27.412 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:27.412 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:27.412 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:27.412 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:27.412 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:27.412 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:27.412 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:27.412 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
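The @146 lines and the driver messages above show how the allocation is actually applied: the test exports the sizing knobs and runs SPDK's scripts/setup.sh, which also (re)binds the PCI devices to vfio-pci. Reproduced by hand it would look roughly like this; running via sudo -E is an assumption here, since the CI job is already privileged:

    export NRHUGE=512      # 2 MiB hugepages to reserve on each listed node
    export HUGENODE=0,1    # NUMA nodes to allocate on, comma-separated
    sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh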
00:02:27.412 03:16:04 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:27.412 03:16:04 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:27.412 03:16:04 -- setup/hugepages.sh@89 -- # local node
00:02:27.412 03:16:04 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:27.412 03:16:04 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:27.412 03:16:04 -- setup/hugepages.sh@92 -- # local surp
00:02:27.412 03:16:04 -- setup/hugepages.sh@93 -- # local resv
00:02:27.412 03:16:04 -- setup/hugepages.sh@94 -- # local anon
00:02:27.412 03:16:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:27.412 03:16:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:27.412 03:16:04 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:27.412 03:16:04 -- setup/common.sh@18 -- # local node=
00:02:27.412 03:16:04 -- setup/common.sh@19 -- # local var val
00:02:27.412 03:16:04 -- setup/common.sh@20 -- # local mem_f mem
00:02:27.412 03:16:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:27.412 03:16:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:27.412 03:16:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:27.412 03:16:04 -- setup/common.sh@28 -- # mapfile -t mem
00:02:27.412 03:16:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:27.412 03:16:04 -- setup/common.sh@31 -- # IFS=': '
00:02:27.412 03:16:04 -- setup/common.sh@31 -- # read -r var val _
00:02:27.412 03:16:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38299604 kB' 'MemAvailable: 41993392 kB' 'Buffers: 2696 kB' 'Cached: 17767264 kB' 'SwapCached: 0 kB' 'Active: 14724772 kB' 'Inactive: 3498348 kB' 'Active(anon): 14137600 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456360 kB' 'Mapped: 210452 kB' 'Shmem: 13684440 kB' 'KReclaimable: 204212 kB' 'Slab: 584256 kB' 'SReclaimable: 204212 kB' 'SUnreclaim: 380044 kB' 'KernelStack: 12864 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15301744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
[xtrace elided: setup/common.sh@31-32 compares each /proc/meminfo field from MemTotal onward against AnonHugePages and issues "continue" for each until the matching line is read]
00:02:27.412 03:16:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:27.412 03:16:04 -- setup/common.sh@33 -- # echo 0
00:02:27.412 03:16:04 -- setup/common.sh@33 -- # return 0
00:02:27.412 03:16:04 -- setup/hugepages.sh@97 -- # anon=0
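The @96 test above inspects the transparent-hugepage mode string ("always [madvise] never") and only samples AnonHugePages when THP is not pinned to [never]; the value then comes from /proc/meminfo. A minimal sketch of that sampling, not the exact SPDK helper:

    # Sample anon hugepage usage the way hugepages.sh@96-97 does.
    thp_path=/sys/kernel/mm/transparent_hugepage/enabled   # standard sysfs file
    anon=0
    if [[ -e $thp_path && $(<"$thp_path") != *'[never]'* ]]; then
        # AnonHugePages is reported in kB in /proc/meminfo
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    fi
    echo "anon_hugepages=${anon} kB"

In this run the selected mode is [madvise], so the branch is taken and the snapshot's "AnonHugePages: 0 kB" yields anon=0, matching the trace.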
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB' 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.413 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.413 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.413 
[xtrace condensed: the IFS=': ' / read -r var val _ / [[ $var == HugePages_Surp ]] / continue cycle repeats once per remaining meminfo key, Unevictable through HugePages_Rsvd, with no match]
00:02:27.413 03:16:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:27.413 03:16:04 -- setup/common.sh@33 -- # echo 0
00:02:27.413 03:16:04 -- setup/common.sh@33 -- # return 0
00:02:27.413 03:16:04 -- setup/hugepages.sh@99 -- # surp=0
00:02:27.413 03:16:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:27.413 03:16:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:27.413 03:16:04 -- setup/common.sh@18 -- # local node=
00:02:27.413 03:16:04 -- setup/common.sh@19 -- # local var val
00:02:27.413 03:16:04 -- setup/common.sh@20 -- # local mem_f mem
00:02:27.413 03:16:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:27.413 03:16:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:27.413 03:16:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:27.413 03:16:04 -- setup/common.sh@28 -- # mapfile -t mem
00:02:27.413 03:16:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:27.413 03:16:04 -- setup/common.sh@31 -- # IFS=': '
00:02:27.413 03:16:04 -- setup/common.sh@31 -- # read -r var val _
00:02:27.413 03:16:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38300240 kB' 'MemAvailable: 41994028 kB' 'Buffers: 2696 kB' 'Cached: 17767272 kB' 'SwapCached: 0 kB' 'Active: 14725068 kB' 'Inactive: 3498348 kB' 'Active(anon): 14137896 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456752 kB' 'Mapped: 210448 kB' 'Shmem: 13684448 kB' 'KReclaimable: 204212 kB' 'Slab: 584280 kB' 'SReclaimable: 204212 kB' 'SUnreclaim: 380068 kB' 'KernelStack: 12912 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15301768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196648 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
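The IFS=': ' / read -r var val _ / [[ ... ]] / continue triplets that dominate this trace are one loop iteration per line of the snapshot printed above: get_meminfo walks the captured meminfo text and returns the value of the requested key. A minimal standalone sketch of that lookup, runnable on any Linux box (the function body is inferred from the trace, not copied from setup/common.sh):

    #!/usr/bin/env bash
    # Re-creation of the lookup loop traced above: split each meminfo line
    # on ': ', compare the key against the requested one, print its value.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # one xtrace [[ ]]/continue per key
            echo "$val"                        # a kB figure, or a bare page count
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 here, matching the surp=0 above

Splitting on IFS=': ' (colon and space) means the "kB" unit lands in the throwaway _ field, so the caller gets a bare number back.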
00:02:27.413 [xtrace condensed: the same per-key scan repeats over the snapshot above until the requested key is reached]
00:02:27.414 03:16:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:27.414 03:16:04 -- setup/common.sh@33 -- # echo 0
00:02:27.414 03:16:04 -- setup/common.sh@33 -- # return 0
00:02:27.414 03:16:04 -- setup/hugepages.sh@100 -- # resv=0
00:02:27.414 03:16:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:27.414 nr_hugepages=1024
00:02:27.414 03:16:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:27.414 resv_hugepages=0
00:02:27.414 03:16:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:27.414 surplus_hugepages=0
00:02:27.414 03:16:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:27.414 anon_hugepages=0
00:02:27.414 03:16:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:27.414 03:16:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
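With surp=0 and resv=0 collected, the test asserts that the kernel's hugepage pool accounts for all 1024 requested pages, which is what the two (( ... )) checks above express. The same arithmetic can be restated directly against /proc/meminfo (a sketch; the helper name and variable names are illustrative):

    #!/usr/bin/env bash
    # Restate the pool-accounting checks from the trace above.
    meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

    requested=1024                        # pages the test configured
    nr=$(meminfo HugePages_Total)         # pages the kernel reports
    surp=$(meminfo HugePages_Surp)        # surplus (overcommitted) pages
    resv=$(meminfo HugePages_Rsvd)        # reserved but not yet faulted pages

    # Mirrors "(( 1024 == nr_hugepages + surp + resv ))" and
    # "(( 1024 == nr_hugepages ))" from the xtrace; both hold in this
    # run because surp and resv are zero.
    (( requested == nr + surp + resv )) && (( requested == nr )) \
        && echo "hugepage pool accounts for all $requested pages"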
00:02:27.414 03:16:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:27.414 03:16:04 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:27.414 03:16:04 -- setup/common.sh@18 -- # local node=
00:02:27.414 03:16:04 -- setup/common.sh@19 -- # local var val
00:02:27.414 03:16:04 -- setup/common.sh@20 -- # local mem_f mem
00:02:27.414 03:16:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:27.414 03:16:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:27.414 03:16:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:27.414 03:16:04 -- setup/common.sh@28 -- # mapfile -t mem
00:02:27.414 03:16:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:27.414 03:16:04 -- setup/common.sh@31 -- # IFS=': '
00:02:27.414 03:16:04 -- setup/common.sh@31 -- # read -r var val _
00:02:27.414 03:16:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38300240 kB' 'MemAvailable: 41994028 kB' 'Buffers: 2696 kB' 'Cached: 17767300 kB' 'SwapCached: 0 kB' 'Active: 14725424 kB' 'Inactive: 3498348 kB' 'Active(anon): 14138252 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457028 kB' 'Mapped: 210448 kB' 'Shmem: 13684476 kB' 'KReclaimable: 204212 kB' 'Slab: 584280 kB' 'SReclaimable: 204212 kB' 'SUnreclaim: 380068 kB' 'KernelStack: 12912 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15301784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
00:02:27.414 [xtrace condensed: per-key scan of the snapshot above until HugePages_Total]
00:02:27.415 03:16:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:27.415 03:16:04 -- setup/common.sh@33 -- # echo 1024
00:02:27.415 03:16:04 -- setup/common.sh@33 -- # return 0
00:02:27.415 03:16:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:27.415 03:16:04 -- setup/hugepages.sh@112 -- # get_nodes
00:02:27.415 03:16:04 -- setup/hugepages.sh@27 -- # local node
00:02:27.415 03:16:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:27.415 03:16:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:27.415 03:16:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:27.415 03:16:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:27.415 03:16:04 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:27.415 03:16:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
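get_nodes, traced just above, enumerates the NUMA nodes with the extglob pattern node+([0-9]) and records how many hugepages each node is expected to hold: 512 per node out of the 1024 total on this two-node box. A sketch of the same enumeration (the 512-per-node figure is this run's expectation, not a constant):

    #!/usr/bin/env bash
    # Enumerate NUMA nodes the way the trace does and record the
    # per-node hugepage expectation (512 each on this 2-node system).
    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512    # strip the path down to the node id
    done
    no_nodes=${#nodes_sys[@]}
    echo "no_nodes=$no_nodes"            # trace: no_nodes=2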
00:02:27.415 03:16:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:27.415 03:16:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:27.415 03:16:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:27.415 03:16:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:27.415 03:16:04 -- setup/common.sh@18 -- # local node=0
00:02:27.415 03:16:04 -- setup/common.sh@19 -- # local var val
00:02:27.415 03:16:04 -- setup/common.sh@20 -- # local mem_f mem
00:02:27.415 03:16:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:27.415 03:16:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:27.415 03:16:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:27.415 03:16:04 -- setup/common.sh@28 -- # mapfile -t mem
00:02:27.415 03:16:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:27.415 03:16:04 -- setup/common.sh@31 -- # IFS=': '
00:02:27.415 03:16:04 -- setup/common.sh@31 -- # read -r var val _
00:02:27.415 03:16:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19856008 kB' 'MemUsed: 13020932 kB' 'SwapCached: 0 kB' 'Active: 7599976 kB' 'Inactive: 3329304 kB' 'Active(anon): 7367372 kB' 'Inactive(anon): 0 kB' 'Active(file): 232604 kB' 'Inactive(file): 3329304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10762944 kB' 'Mapped: 105160 kB' 'AnonPages: 169468 kB' 'Shmem: 7201036 kB' 'KernelStack: 6072 kB' 'PageTables: 5268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105896 kB' 'Slab: 327748 kB' 'SReclaimable: 105896 kB' 'SUnreclaim: 221852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:27.415 [xtrace condensed: per-key scan of the node0 snapshot above until HugePages_Surp]
00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@33 -- # echo 0 00:02:27.416 03:16:04 -- setup/common.sh@33 -- # return 0 00:02:27.416 03:16:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:27.416 03:16:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:27.416 03:16:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:27.416 03:16:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:27.416 03:16:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:27.416 03:16:04 -- setup/common.sh@18 -- # local node=1 00:02:27.416 03:16:04 -- setup/common.sh@19 -- # local var val 00:02:27.416 03:16:04 -- setup/common.sh@20 -- # local mem_f mem 00:02:27.416 03:16:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:27.416 03:16:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:27.416 03:16:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:27.416 03:16:04 -- setup/common.sh@28 -- # mapfile -t mem 00:02:27.416 03:16:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664776 kB' 'MemFree: 18444232 kB' 'MemUsed: 9220544 kB' 'SwapCached: 0 kB' 'Active: 7125472 kB' 'Inactive: 169044 kB' 'Active(anon): 6770904 kB' 'Inactive(anon): 0 kB' 'Active(file): 354568 kB' 'Inactive(file): 169044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7007080 kB' 'Mapped: 105288 kB' 'AnonPages: 287560 kB' 'Shmem: 6483468 kB' 'KernelStack: 6840 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98316 kB' 'Slab: 256532 kB' 'SReclaimable: 98316 kB' 'SUnreclaim: 158216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 
03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # continue 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:27.416 03:16:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.416 03:16:04 -- 
setup/common.sh@32 -- # continue
00:02:27.416 [... scan continues: each remaining /proc/meminfo key, Shmem through HugePages_Free, is compared against HugePages_Surp and skipped with `continue`; repetitive iterations elided ...]
00:02:27.416 03:16:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:27.416 03:16:04 -- setup/common.sh@33 -- # echo 0
00:02:27.416 03:16:04 -- setup/common.sh@33 -- # return 0
00:02:27.416 03:16:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:27.416 03:16:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:27.416 03:16:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:27.416 03:16:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:27.416 03:16:04 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:27.416 node0=512 expecting 512
00:02:27.416 03:16:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:27.416 03:16:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:27.416 03:16:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:27.416 03:16:04 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:27.416 node1=512 expecting 512
00:02:27.416 03:16:04 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:27.416 
00:02:27.416 real	0m1.366s
00:02:27.416 user	0m0.533s
00:02:27.416 sys	0m0.792s
00:02:27.416 03:16:04 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:27.416 03:16:04 -- common/autotest_common.sh@10 -- # set +x
00:02:27.416 ************************************
00:02:27.416 END TEST per_node_1G_alloc
00:02:27.416 ************************************
00:02:27.416 03:16:04 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
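For readers following the trace: the run_test wrapper invoked above is what produces the START/END banners, the `time` output, and the argument-count check (`'[' 2 -le 1 ']'`) that open the next test. A minimal bash sketch of that pattern, reconstructed from what the trace shows rather than from SPDK's actual autotest_common.sh:

    run_test() {
            # guard: need at least a test name plus a command (cf. '[' 2 -le 1 ']')
            if [ "$#" -le 1 ]; then
                    echo "usage: run_test <name> <command> [args...]" >&2
                    return 1
            fi
            local name=$1
            shift
            echo "************************************"
            echo "START TEST $name"
            echo "************************************"
            time "$@"    # source of the real/user/sys lines seen above
            echo "************************************"
            echo "END TEST $name"
            echo "************************************"
    }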
00:02:27.416 03:16:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:27.416 03:16:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:27.416 03:16:04 -- common/autotest_common.sh@10 -- # set +x
00:02:27.676 ************************************
00:02:27.676 START TEST even_2G_alloc
00:02:27.676 ************************************
00:02:27.676 03:16:04 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:02:27.676 03:16:04 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:02:27.676 03:16:04 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:27.676 03:16:04 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:27.676 03:16:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:27.676 03:16:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:27.676 03:16:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:27.676 03:16:04 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:27.676 03:16:04 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:27.676 03:16:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:27.676 03:16:04 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:27.676 03:16:04 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:27.676 03:16:04 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:27.676 03:16:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:27.676 03:16:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:27.676 03:16:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:27.676 03:16:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:27.676 03:16:04 -- setup/hugepages.sh@83 -- # : 512
00:02:27.676 03:16:04 -- setup/hugepages.sh@84 -- # : 1
00:02:27.676 03:16:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:27.676 03:16:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:27.676 03:16:04 -- setup/hugepages.sh@83 -- # : 0
00:02:27.676 03:16:04 -- setup/hugepages.sh@84 -- # : 0
00:02:27.676 03:16:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:27.676 03:16:04 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:02:27.676 03:16:04 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:02:27.676 03:16:04 -- setup/hugepages.sh@153 -- # setup output
00:02:27.676 03:16:04 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:27.676 03:16:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
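Before the setup.sh device output that follows, note the per-node split traced at hugepages.sh@81-84: 1024 pages spread evenly over two NUMA nodes, 512 each. A minimal sketch of that loop, assuming the bare `: 512` / `: 1` / `: 0` entries above come from `: $((...))` arithmetic no-ops (a reconstruction consistent with the trace, not the verbatim SPDK source):

    get_test_nr_hugepages_per_node() {
            local _nr_hugepages=1024        # from get_test_nr_hugepages 2097152 (2 MB pages)
            local _no_nodes=2               # NUMA node count on this machine
            local -g nodes_test=()
            while ((_no_nodes > 0)); do
                    # hand the highest-numbered node its share first
                    nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
                    : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))   # traced as ': 512', then ': 0'
                    : $((--_no_nodes))                                  # traced as ': 1', then ': 0'
            done
    }

With 1024 pages and 2 nodes this yields nodes_test=(512 512), which is exactly what the per-node checks later in the trace expect.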
00:02:28.613 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:28.613 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:28.613 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:28.613 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:28.613 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:28.613 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:28.613 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:28.613 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:28.613 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:28.613 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:28.876 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:28.876 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:28.876 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:28.876 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:28.876 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:28.876 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:28.876 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:28.876 03:16:06 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:02:28.876 03:16:06 -- setup/hugepages.sh@89 -- # local node
00:02:28.876 03:16:06 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:28.876 03:16:06 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:28.876 03:16:06 -- setup/hugepages.sh@92 -- # local surp
00:02:28.876 03:16:06 -- setup/hugepages.sh@93 -- # local resv
00:02:28.876 03:16:06 -- setup/hugepages.sh@94 -- # local anon
00:02:28.876 03:16:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:28.876 03:16:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:28.876 03:16:06 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:28.876 03:16:06 -- setup/common.sh@18 -- # local node=
00:02:28.876 03:16:06 -- setup/common.sh@19 -- # local var val
00:02:28.876 03:16:06 -- setup/common.sh@20 -- # local mem_f mem
00:02:28.876 03:16:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:28.876 03:16:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:28.876 03:16:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:28.876 03:16:06 -- setup/common.sh@28 -- # mapfile -t mem
00:02:28.876 03:16:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:28.876 03:16:06 -- setup/common.sh@31 -- # IFS=': '
00:02:28.876 03:16:06 -- setup/common.sh@31 -- # read -r var val _
00:02:28.876 03:16:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38310600 kB' 'MemAvailable: 42004388 kB' 'Buffers: 2696 kB' 'Cached: 17767364 kB' 'SwapCached: 0 kB' 'Active: 14725292 kB' 'Inactive: 3498348 kB' 'Active(anon): 14138120 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455572 kB' 'Mapped: 210276 kB' 'Shmem: 13684540 kB' 'KReclaimable: 204212 kB' 'Slab: 584316 kB' 'SReclaimable: 204212 kB' 'SUnreclaim: 380104 kB' 'KernelStack: 12896 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15305488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
00:02:28.876 [... scan: each /proc/meminfo key is compared against AnonHugePages and skipped with `continue` until the key matches; repetitive iterations elided ...]
00:02:28.877 03:16:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:28.877 03:16:06 -- setup/common.sh@33 -- # echo 0
00:02:28.877 03:16:06 -- setup/common.sh@33 -- # return 0
00:02:28.877 03:16:06 -- setup/hugepages.sh@97 -- # anon=0
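The get_meminfo helper has now been traced end to end, so its shape is clear from the log itself: read /proc/meminfo (or a per-node meminfo file when a node argument is given), split each line on ': ', and echo the value of the first matching key. A condensed sketch assembled from the traced commands; the here-string feed and the extglob requirement are assumptions on my part:

    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {
            local get=$1 node=${2:-}
            local var val _
            local mem_f mem
            mem_f=/proc/meminfo
            # per-node variant, as suggested by the @23 test above (assumed path)
            if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
                    mem_f=/sys/devices/system/node/node$node/meminfo
            fi
            mapfile -t mem < "$mem_f"
            mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix each line with "Node N "
            local line
            for line in "${mem[@]}"; do
                    IFS=': ' read -r var val _ <<< "$line"
                    [[ $var == "$get" ]] || continue    # source of the long runs of 'continue' above
                    echo "$val"                         # e.g. 'echo 0' for AnonHugePages
                    return 0
            done
            return 1
    }

Usage as it appears in this trace: anon=$(get_meminfo AnonHugePages), and next surp=$(get_meminfo HugePages_Surp) and resv=$(get_meminfo HugePages_Rsvd), optionally with a node number for the per-node checks.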
00:02:28.877 03:16:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:28.877 03:16:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:28.877 03:16:06 -- setup/common.sh@18 -- # local node=
00:02:28.877 03:16:06 -- setup/common.sh@19 -- # local var val
00:02:28.877 03:16:06 -- setup/common.sh@20 -- # local mem_f mem
00:02:28.877 03:16:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:28.877 03:16:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:28.877 03:16:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:28.877 03:16:06 -- setup/common.sh@28 -- # mapfile -t mem
00:02:28.877 03:16:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:28.877 03:16:06 -- setup/common.sh@31 -- # IFS=': '
00:02:28.877 03:16:06 -- setup/common.sh@31 -- # read -r var val _
00:02:28.877 03:16:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38312836 kB' 'MemAvailable: 42006624 kB' 'Buffers: 2696 kB' 'Cached: 17767364 kB' 'SwapCached: 0 kB' 'Active: 14725900 kB' 'Inactive: 3498348 kB' 'Active(anon): 14138728 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457092 kB' 'Mapped: 210480 kB' 'Shmem: 13684540 kB' 'KReclaimable: 204212 kB' 'Slab: 584316 kB' 'SReclaimable: 204212 kB' 'SUnreclaim: 380104 kB' 'KernelStack: 12896 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15299768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196648 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
00:02:28.877 [... scan: each /proc/meminfo key is compared against HugePages_Surp and skipped with `continue` until the key matches; repetitive iterations elided ...]
00:02:28.879 03:16:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:28.879 03:16:06 -- setup/common.sh@33 -- # echo 0
00:02:28.879 03:16:06 -- setup/common.sh@33 -- # return 0
00:02:28.879 03:16:06 -- setup/hugepages.sh@99 -- # surp=0
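The anon and surp values gathered above, together with the HugePages_Rsvd read that follows, feed the accounting check at hugepages.sh@107-109. One plausible reading of that check, with variable and helper names as they appear in this trace (the exact expression in SPDK's hugepages.sh may differ):

    # anon, surp and resv come from get_meminfo; nr_hugepages was set to 1024
    # by get_test_nr_hugepages. Hypothetical reconstruction of the verification:
    total=$(get_meminfo HugePages_Total)
    (( total == nr_hugepages + surp + resv ))   # @107: every allocated page is accounted for
    (( total == nr_hugepages ))                 # @109: and none of them is surplus or reserved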
00:02:28.879 03:16:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:28.879 03:16:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:28.879 03:16:06 -- setup/common.sh@18 -- # local node=
00:02:28.879 03:16:06 -- setup/common.sh@19 -- # local var val
00:02:28.879 03:16:06 -- setup/common.sh@20 -- # local mem_f mem
00:02:28.879 03:16:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:28.879 03:16:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:28.879 03:16:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:28.879 03:16:06 -- setup/common.sh@28 -- # mapfile -t mem
00:02:28.879 03:16:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:28.879 03:16:06 -- setup/common.sh@31 -- # IFS=': '
00:02:28.879 03:16:06 -- setup/common.sh@31 -- # read -r var val _
00:02:28.879 03:16:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38311588 kB' 'MemAvailable: 42005376 kB' 'Buffers: 2696 kB' 'Cached: 17767376 kB' 'SwapCached: 0 kB' 'Active: 14725452 kB' 'Inactive: 3498348 kB' 'Active(anon): 14138280 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456156 kB' 'Mapped: 210480 kB' 'Shmem: 13684552 kB' 'KReclaimable: 204212 kB' 'Slab: 584300 kB' 'SReclaimable: 204212 kB' 'SUnreclaim: 380088 kB' 'KernelStack: 12832 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15299784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
00:02:28.879 [... scan: each /proc/meminfo key is compared against HugePages_Rsvd and skipped with `continue` until the key matches; repetitive iterations elided ...]
00:02:28.880 03:16:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:28.880 03:16:06 -- setup/common.sh@33 -- # echo 0
00:02:28.880 03:16:06 -- setup/common.sh@33 -- # return 0
00:02:28.880 03:16:06 -- setup/hugepages.sh@100 -- # resv=0
00:02:28.880 03:16:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:28.880 nr_hugepages=1024
00:02:28.880 03:16:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:28.880 resv_hugepages=0
00:02:28.880 03:16:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:28.880 surplus_hugepages=0
00:02:28.880 03:16:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:28.880 anon_hugepages=0
00:02:28.880 03:16:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:28.880 03:16:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:28.880 03:16:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:28.880 03:16:06 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:28.880 03:16:06 -- setup/common.sh@18 -- # local node=
00:02:28.880 03:16:06 -- setup/common.sh@19 -- # local var val
00:02:28.880 03:16:06 -- setup/common.sh@20 -- # local mem_f mem
00:02:28.880 03:16:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:28.880 03:16:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:28.880 03:16:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:28.880 03:16:06 -- setup/common.sh@28 -- # mapfile -t mem
00:02:28.880 03:16:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:28.880 03:16:06 -- setup/common.sh@31 -- # IFS=': '
00:02:28.880 03:16:06 -- setup/common.sh@31 -- # read -r var val _
00:02:28.880 03:16:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38312240 kB' 'MemAvailable: 42006028 kB' 'Buffers: 2696 kB' 'Cached: 17767392 kB' 'SwapCached: 0 kB' 'Active: 14724968 kB' 'Inactive: 3498348 kB' 'Active(anon): 14137796 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456500 kB' 'Mapped: 210480 kB' 'Shmem: 13684568 kB' 'KReclaimable: 204212 kB' 'Slab: 584400 kB' 'SReclaimable: 204212 kB' 'SUnreclaim: 380188 kB' 'KernelStack: 12896 kB'
'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15299928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
00:02:28.880 03:16:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] [identical IFS=': ' / read / compare / continue trace repeated for every other meminfo field]
00:02:28.882 03:16:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:28.882 03:16:06 -- setup/common.sh@33 -- # echo 1024
00:02:28.882 03:16:06 -- setup/common.sh@33 -- # return 0
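The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one 'key: value' line at a time until the requested field matches, then echoing the bare value. A minimal standalone sketch of that lookup pattern, assuming only what the trace shows (the function name is illustrative, not the script's exact source):

#!/usr/bin/env bash
# Sketch: split each /proc/meminfo line on ': ' and stop at the first
# matching key, echoing the numeric value (any unit such as kB lands in $_).
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Total    # prints 1024 on the node traced above

The backslash-escaped pattern in the log ([[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]) is just xtrace's rendering of a literal string comparison, which is why every non-matching field falls through to the continue branch.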
00:02:28.882 03:16:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:28.882 03:16:06 -- setup/hugepages.sh@112 -- # get_nodes
00:02:28.882 03:16:06 -- setup/hugepages.sh@27 -- # local node
00:02:28.882 03:16:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:28.882 03:16:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:28.882 03:16:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:28.882 03:16:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:28.882 03:16:06 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:28.882 03:16:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:28.882 03:16:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:28.882 03:16:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:28.882 03:16:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:28.882 03:16:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:28.882 03:16:06 -- setup/common.sh@18 -- # local node=0
00:02:28.882 03:16:06 -- setup/common.sh@19 -- # local var val
00:02:28.882 03:16:06 -- setup/common.sh@20 -- # local mem_f mem
00:02:28.882 03:16:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:28.882 03:16:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:28.882 03:16:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:28.882 03:16:06 -- setup/common.sh@28 -- # mapfile -t mem
00:02:28.882 03:16:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:28.882 03:16:06 -- setup/common.sh@31 -- # IFS=': '
00:02:28.882 03:16:06 -- setup/common.sh@31 -- # read -r var val _
00:02:28.882 03:16:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19867272 kB' 'MemUsed: 13009668 kB' 'SwapCached: 0 kB' 'Active: 7598480 kB' 'Inactive: 3329304 kB' 'Active(anon): 7365876 kB' 'Inactive(anon): 0 kB' 'Active(file): 232604 kB' 'Inactive(file): 3329304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10762956 kB' 'Mapped: 105156 kB' 'AnonPages: 168032 kB' 'Shmem: 7201048 kB' 'KernelStack: 6088 kB' 'PageTables: 5168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105896 kB' 'Slab: 327840 kB' 'SReclaimable: 105896 kB' 'SUnreclaim: 221944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:28.882 03:16:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [identical per-field scan over the node0 snapshot repeated until the requested key matches]
00:02:28.882 03:16:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:28.882 03:16:06 -- setup/common.sh@33 -- # echo 0
00:02:28.882 03:16:06 -- setup/common.sh@33 -- # return 0
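When a node argument is supplied, the same helper switches to that node's own meminfo under sysfs, whose lines carry a "Node N " prefix that the trace strips with an extglob pattern before parsing. A sketch of that per-node variant, assuming the standard sysfs layout (function name illustrative):

#!/usr/bin/env bash
# Sketch of the per-node lookup traced above: prefer the node's meminfo file
# and strip the "Node N " prefix the way the trace does. The prefix strip
# uses an extglob pattern, so extglob must be enabled.
shopt -s extglob

get_node_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _ line
    local -a mem
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_node_meminfo_sketch HugePages_Surp 0    # prints 0 in the run traced above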
00:02:29.143 03:16:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:29.143 03:16:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:29.143 03:16:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:29.143 03:16:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:29.143 03:16:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:29.143 03:16:06 -- setup/common.sh@18 -- # local node=1
00:02:29.143 03:16:06 -- setup/common.sh@19 -- # local var val
00:02:29.143 03:16:06 -- setup/common.sh@20 -- # local mem_f mem
00:02:29.143 03:16:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:29.143 03:16:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:29.143 03:16:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:29.143 03:16:06 -- setup/common.sh@28 -- # mapfile -t mem
00:02:29.143 03:16:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:29.143 03:16:06 -- setup/common.sh@31 -- # IFS=': '
00:02:29.143 03:16:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664776 kB' 'MemFree: 18444928 kB' 'MemUsed: 9219848 kB' 'SwapCached: 0 kB' 'Active: 7126148 kB' 'Inactive: 169044 kB' 'Active(anon): 6771580 kB' 'Inactive(anon): 0 kB' 'Active(file): 354568 kB' 'Inactive(file): 169044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7007164 kB' 'Mapped: 105312 kB' 'AnonPages: 288120 kB' 'Shmem: 6483552 kB' 'KernelStack: 6808 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98316 kB' 'Slab: 256560 kB' 'SReclaimable: 98316 kB' 'SUnreclaim: 158244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:29.143 03:16:06 -- setup/common.sh@31 -- # read -r var val _
00:02:29.143 03:16:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [identical per-field scan over the node1 snapshot repeated until the requested key matches]
00:02:29.144 03:16:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:29.144 03:16:06 -- setup/common.sh@33 -- # echo 0
00:02:29.144 03:16:06 -- setup/common.sh@33 -- # return 0
00:02:29.144 03:16:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:29.144 03:16:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:29.144 03:16:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:29.144 03:16:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:29.144 03:16:06 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:29.144 node0=512 expecting 512
00:02:29.144 03:16:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:29.144 03:16:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:29.144 03:16:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:29.144 03:16:06 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:29.144 node1=512 expecting 512
00:02:29.144 03:16:06 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:29.144
00:02:29.144 real	0m1.491s
00:02:29.144 user	0m0.615s
00:02:29.144 sys	0m0.831s
00:02:29.144 03:16:06 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:29.144 03:16:06 -- common/autotest_common.sh@10 -- # set +x
00:02:29.144 ************************************
00:02:29.144 END TEST even_2G_alloc
00:02:29.144 ************************************
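The sorted_t[nodes_test[node]]=1 and sorted_s[nodes_sys[node]]=1 assignments in the verification loop read like a dedup-by-key trick: each per-node count is used as an associative-array key, so duplicate counts collapse onto one key. A hedged sketch of that idea; the final comparison step is an assumed reading, since this log chunk only shows the assignments:

#!/usr/bin/env bash
# Sketch: collapse per-node hugepage counts onto associative-array keys;
# one surviving key means every node received the same allocation.
declare -A sorted_t=()
nodes_test=(512 512)    # per-node counts from the even_2G_alloc run above
for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1
done
(( ${#sorted_t[@]} == 1 )) && echo "all nodes hold the same hugepage count"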
00:02:29.144 03:16:06 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:02:29.144 03:16:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:29.144 03:16:06 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:29.144 03:16:06 -- common/autotest_common.sh@10 -- # set +x
00:02:29.144 ************************************
00:02:29.144 START TEST odd_alloc
00:02:29.144 ************************************
00:02:29.145 03:16:06 -- common/autotest_common.sh@1111 -- # odd_alloc
00:02:29.145 03:16:06 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:02:29.145 03:16:06 -- setup/hugepages.sh@49 -- # local size=2098176
00:02:29.145 03:16:06 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:29.145 03:16:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:29.145 03:16:06 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:02:29.145 03:16:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:29.145 03:16:06 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:29.145 03:16:06 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:29.145 03:16:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:02:29.145 03:16:06 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:29.145 03:16:06 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:29.145 03:16:06 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:29.145 03:16:06 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:29.145 03:16:06 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:29.145 03:16:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:29.145 03:16:06 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:29.145 03:16:06 -- setup/hugepages.sh@83 -- # : 513
00:02:29.145 03:16:06 -- setup/hugepages.sh@84 -- # : 1
00:02:29.145 03:16:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:29.145 03:16:06 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:02:29.145 03:16:06 -- setup/hugepages.sh@83 -- # : 0
00:02:29.145 03:16:06 -- setup/hugepages.sh@84 -- # : 0
00:02:29.145 03:16:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
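get_test_nr_hugepages is asked for 2098176 kB here; at the 2048 kB default hugepage size that works out to 1024.5 pages, and the trace settles on nr_hugepages=1025, the intentionally odd count this test wants. The @81-@84 lines then distribute those 1025 pages across the two nodes with a divide-and-carry loop: the last node gets floor(remaining/nodes), the remainder carries forward, so node1 gets 512 first and node0 ends with 513. A standalone sketch of that distribution; variable names mirror the trace, and the ": N" lines in the log are the side effects of the two arithmetic no-ops shown directly below:

#!/usr/bin/env bash
# Sketch: spread an odd hugepage count over NUMA nodes, last node first.
_nr_hugepages=1025
_no_nodes=2
nodes_test=()
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))    # 1025/2 = 512
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))           # 513 left, matching ": 513"
    : $(( _no_nodes-- ))                                          # next pass: 513/1 = 513
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"              # node0=513 node1=512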
00:02:29.145 03:16:06 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:02:29.145 03:16:06 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:02:29.145 03:16:06 -- setup/hugepages.sh@160 -- # setup output
00:02:29.145 03:16:06 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:29.145 03:16:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:30.524 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:30.524 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:30.524 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:30.524 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:30.524 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:30.524 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:30.524 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:30.524 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:30.524 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:30.524 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:30.524 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:30.524 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:30.524 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:30.524 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:30.524 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:30.524 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:30.524 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:30.524 03:16:07 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:02:30.524 03:16:07 -- setup/hugepages.sh@89 -- # local node
00:02:30.524 03:16:07 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:30.524 03:16:07 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:30.524 03:16:07 -- setup/hugepages.sh@92 -- # local surp
00:02:30.525 03:16:07 -- setup/hugepages.sh@93 -- # local resv
00:02:30.525 03:16:07 -- setup/hugepages.sh@94 -- # local anon
00:02:30.525 03:16:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
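The @96 gate above compares the transparent-hugepage mode string against *\[\n\e\v\e\r\]*: on this box the file reads "always [madvise] never", the bracketed word being the active mode, so THP is not disabled and the anonymous-hugepage counter stays worth checking. A sketch of that check, using the standard sysfs location (the variable name is illustrative):

#!/usr/bin/env bash
# Sketch: the kernel brackets the active THP mode in this file,
# e.g. "always [madvise] never"; "[never]" would mean THP is off.
thp=/sys/kernel/mm/transparent_hugepage/enabled
if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
    echo "THP active; AnonHugePages may be non-zero"
fi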
00:02:30.525 03:16:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:30.525 03:16:07 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:30.525 03:16:07 -- setup/common.sh@18 -- # local node=
00:02:30.525 03:16:07 -- setup/common.sh@19 -- # local var val
00:02:30.525 03:16:07 -- setup/common.sh@20 -- # local mem_f mem
00:02:30.525 03:16:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:30.525 03:16:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:30.525 03:16:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:30.525 03:16:07 -- setup/common.sh@28 -- # mapfile -t mem
00:02:30.525 03:16:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:30.525 03:16:07 -- setup/common.sh@31 -- # IFS=': '
00:02:30.525 03:16:07 -- setup/common.sh@31 -- # read -r var val _
00:02:30.525 03:16:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38337300 kB' 'MemAvailable: 42031072 kB' 'Buffers: 2696 kB' 'Cached: 17767464 kB' 'SwapCached: 0 kB' 'Active: 14722992 kB' 'Inactive: 3498348 kB' 'Active(anon): 14135820 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454412 kB' 'Mapped: 209892 kB' 'Shmem: 13684640 kB' 'KReclaimable: 204180 kB' 'Slab: 584168 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379988 kB' 'KernelStack: 12784 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 15271108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196600 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
00:02:30.525 03:16:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] [identical per-field scan over the snapshot repeated until the requested key matches]
00:02:30.526 03:16:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:30.526 03:16:07 -- setup/common.sh@33 -- # echo 0
00:02:30.526 03:16:07 -- setup/common.sh@33 -- # return 0
00:02:30.526 03:16:07 -- setup/hugepages.sh@97 -- # anon=0
00:02:30.526 03:16:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:30.526 03:16:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:30.526 03:16:07 -- setup/common.sh@18 -- # local node=
00:02:30.526 03:16:07 -- setup/common.sh@19 -- # local var val
00:02:30.526 03:16:07 -- setup/common.sh@20 -- # local mem_f mem
00:02:30.526 03:16:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:30.526 03:16:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:30.526 03:16:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:30.526 03:16:07 -- setup/common.sh@28 -- # mapfile -t mem
00:02:30.526 03:16:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:30.526 03:16:07 -- setup/common.sh@31 -- # IFS=': '
00:02:30.526 03:16:07 -- setup/common.sh@31 -- # read -r var val _
00:02:30.526 03:16:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38337724 kB' 'MemAvailable: 42031496 kB' 'Buffers: 2696 kB' 'Cached: 17767468 kB' 'SwapCached: 0 kB' 'Active: 14723732 kB' 'Inactive: 3498348 kB' 'Active(anon): 14136560 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455156 kB' 'Mapped: 210236 kB' 'Shmem: 13684644 kB' 'KReclaimable: 204180 kB' 'Slab: 584156 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379976 kB' 'KernelStack: 12800 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 15271120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196572 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
00:02:30.526 03:16:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [per-field scan against HugePages_Surp runs through VmallocTotal before this log chunk cuts off]
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 
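[Editor's note] Every line in this stretch is bash xtrace output: a Jenkins wall-clock timestamp, the script's own `\t` timestamp, the sourcing file and line (e.g. `setup/common.sh@32`), and the traced command. The PS4 value SPDK actually sets is not visible in this log; the snippet below is only a guess at one that would produce a similar prefix, and `$rootdir` is a hypothetical base-directory variable, not taken from the sources.

```bash
# NOT the SPDK value -- an illustrative PS4 yielding a similar line shape.
# bash expands PS4 with prompt escapes (like PS1), so \t becomes HH:MM:SS.
# Caveat: bash replicates PS4's *first* character (here a space) to show
# nesting depth, so nested substitutions would indent rather than add '+'.
PS4=' \t -- ${BASH_SOURCE#$rootdir/}@${LINENO} -- # '
set -x   # from here on, each command logs as: 03:16:07 -- file.sh@NN -- # cmd
```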
03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.527 03:16:07 -- setup/common.sh@33 -- # echo 0 00:02:30.527 03:16:07 -- setup/common.sh@33 -- # return 0 00:02:30.527 03:16:07 -- setup/hugepages.sh@99 -- # surp=0 00:02:30.527 03:16:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:30.527 03:16:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:30.527 03:16:07 -- setup/common.sh@18 -- # local node= 00:02:30.527 03:16:07 -- setup/common.sh@19 -- # local var val 00:02:30.527 03:16:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:30.527 03:16:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.527 03:16:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:30.527 03:16:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:30.527 03:16:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.527 03:16:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38337756 kB' 'MemAvailable: 42031528 kB' 'Buffers: 2696 kB' 'Cached: 17767484 kB' 'SwapCached: 0 kB' 'Active: 14718688 kB' 'Inactive: 3498348 kB' 'Active(anon): 14131516 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450100 kB' 'Mapped: 210208 kB' 'Shmem: 13684660 kB' 'KReclaimable: 204180 kB' 'Slab: 584156 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379976 kB' 'KernelStack: 12784 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 15267164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196600 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB' 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.527 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.527 03:16:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var 
val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 
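[Editor's note] The long runs of `[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` / `continue` pairs here (and the `..._\S\u\r\p` run before them) are the scan loop inside `get_meminfo` walking every `/proc/meminfo` key until the requested one matches. The following is a hedged reconstruction inferred only from the trace (line tags match the `common.sh@NN` markers above); the real helper likely differs in detail — for instance the `@25 [[ -n '' ]]` branch visible in the trace is elided here.

```bash
#!/usr/bin/env bash
shopt -s extglob

get_meminfo() {
	local get=$1        # key to look up, e.g. HugePages_Rsvd      (@17)
	local node=${2:-}   # optional NUMA node; empty = system-wide  (@18)
	local var val       # (@19)
	local mem_f mem     # (@20)

	mem_f=/proc/meminfo  # (@22)
	# With $node empty this probes the bogus ".../node/node/meminfo",
	# fails, and keeps /proc/meminfo -- exactly what the trace shows (@23).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo   # (@24)
	fi
	mapfile -t mem < "$mem_f"            # (@28)
	# Per-node files prefix each line with "Node N "; strip it (extglob, @29).
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan every "Key: value" line until the requested key matches; each
	# miss is one "[[ ... ]]" / "continue" pair in the trace (@31-@32).
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"                  # (@33)
		return 0
	done < <(printf '%s\n' "${mem[@]}")  # (@16)
	return 1
}

get_meminfo HugePages_Rsvd     # -> 0, matching the "echo 0" in this run
get_meminfo HugePages_Total 0  # node0 -> 512 on this machine
```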
03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.528 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.528 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.529 03:16:07 -- setup/common.sh@33 -- # echo 0 00:02:30.529 03:16:07 -- setup/common.sh@33 -- # return 0 00:02:30.529 03:16:07 -- setup/hugepages.sh@100 -- # resv=0 00:02:30.529 03:16:07 -- setup/hugepages.sh@102 
-- # echo nr_hugepages=1025 00:02:30.529 nr_hugepages=1025 00:02:30.529 03:16:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:30.529 resv_hugepages=0 00:02:30.529 03:16:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:30.529 surplus_hugepages=0 00:02:30.529 03:16:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:30.529 anon_hugepages=0 00:02:30.529 03:16:07 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:30.529 03:16:07 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:30.529 03:16:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:30.529 03:16:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:30.529 03:16:07 -- setup/common.sh@18 -- # local node= 00:02:30.529 03:16:07 -- setup/common.sh@19 -- # local var val 00:02:30.529 03:16:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:30.529 03:16:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.529 03:16:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:30.529 03:16:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:30.529 03:16:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.529 03:16:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38334664 kB' 'MemAvailable: 42028436 kB' 'Buffers: 2696 kB' 'Cached: 17767492 kB' 'SwapCached: 0 kB' 'Active: 14722736 kB' 'Inactive: 3498348 kB' 'Active(anon): 14135564 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454104 kB' 'Mapped: 209864 kB' 'Shmem: 13684668 kB' 'KReclaimable: 204180 kB' 'Slab: 584156 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379976 kB' 'KernelStack: 12800 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 15271148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196588 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB' 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 
-- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- 
setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.529 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.529 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 
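[Editor's note] By this point the test has printed `nr_hugepages=1025`, `resv_hugepages=0`, `surplus_hugepages=0`, `anon_hugepages=0` and asserted `(( 1025 == nr_hugepages + surp + resv ))` at `hugepages.sh@107`; the compare/continue run in progress here is `get_meminfo` re-reading `HugePages_Total` for the `@110` re-check that follows. A standalone restatement of that accounting, under the assumption it reads the standard `/proc/meminfo` counters (`want` is a hypothetical name; the trace only shows the literal 1025):

```bash
#!/usr/bin/env bash
want=1025   # the requested pool size; deliberately odd in this test

# /proc/meminfo lists these in the order Total, Free, Rsvd, Surp.
read -r total free resv surp < <(
	awk '/^HugePages_(Total|Free|Rsvd|Surp):/ { printf "%s ", $2 }
	     END { print "" }' /proc/meminfo
)
echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"

# Every requested page must be accounted for by the kernel's pool counters:
(( want == total + surp + resv )) || { echo "hugepage pool inconsistent" >&2; exit 1; }
```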
00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.530 03:16:07 -- setup/common.sh@33 -- # echo 1025 00:02:30.530 03:16:07 -- setup/common.sh@33 -- # return 0 00:02:30.530 03:16:07 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:30.530 03:16:07 -- setup/hugepages.sh@112 -- # get_nodes 00:02:30.530 03:16:07 -- setup/hugepages.sh@27 -- # local node 00:02:30.530 03:16:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:30.530 03:16:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:30.530 03:16:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:30.530 03:16:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:30.530 03:16:07 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:30.530 03:16:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:30.530 03:16:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:30.530 03:16:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:30.530 03:16:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:30.530 03:16:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:30.530 03:16:07 -- setup/common.sh@18 -- # local node=0 00:02:30.530 03:16:07 -- setup/common.sh@19 -- # local var val 00:02:30.530 03:16:07 -- setup/common.sh@20 
-- # local mem_f mem 00:02:30.530 03:16:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.530 03:16:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:30.530 03:16:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:30.530 03:16:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.530 03:16:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19879224 kB' 'MemUsed: 12997716 kB' 'SwapCached: 0 kB' 'Active: 7598192 kB' 'Inactive: 3329304 kB' 'Active(anon): 7365588 kB' 'Inactive(anon): 0 kB' 'Active(file): 232604 kB' 'Inactive(file): 3329304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10762968 kB' 'Mapped: 104244 kB' 'AnonPages: 167676 kB' 'Shmem: 7201060 kB' 'KernelStack: 5944 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105896 kB' 'Slab: 327680 kB' 'SReclaimable: 105896 kB' 'SUnreclaim: 221784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.530 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.530 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 
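[Editor's note] The node-0 dump above was read from `/sys/devices/system/node/node0/meminfo`, whose raw lines each carry a `Node 0 ` prefix; the `mem=("${mem[@]#Node +([0-9]) }")` expansion at `common.sh@29` has already stripped it before the values were printed. A minimal demo of that extglob expansion, using two values from the dump:

```bash
#!/usr/bin/env bash
shopt -s extglob   # enables the +([0-9]) pattern in expansions below

mem=('Node 0 MemTotal: 32876940 kB' 'Node 0 MemFree: 19879224 kB')
mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix per element
printf '%s\n' "${mem[@]}"          # -> "MemTotal: 32876940 kB" etc.
```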
03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 
03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.531 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.531 03:16:07 -- setup/common.sh@33 -- # echo 0 00:02:30.531 03:16:07 -- setup/common.sh@33 -- # return 0 00:02:30.531 03:16:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:30.531 03:16:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:30.531 03:16:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:30.531 03:16:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:30.531 03:16:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:30.531 03:16:07 -- setup/common.sh@18 -- # local node=1 00:02:30.531 03:16:07 -- setup/common.sh@19 -- # local var val 00:02:30.531 03:16:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:30.531 03:16:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.531 03:16:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:30.531 03:16:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:30.531 03:16:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.531 03:16:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.531 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664776 kB' 'MemFree: 18455552 kB' 'MemUsed: 9209224 kB' 'SwapCached: 0 kB' 'Active: 7124612 kB' 'Inactive: 169044 kB' 'Active(anon): 6770044 kB' 'Inactive(anon): 0 kB' 'Active(file): 354568 kB' 'Inactive(file): 169044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7007236 kB' 'Mapped: 105600 kB' 'AnonPages: 286512 kB' 'Shmem: 6483624 kB' 'KernelStack: 6840 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98284 kB' 'Slab: 256476 kB' 'SReclaimable: 98284 kB' 'SUnreclaim: 158192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 
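[Editor's note] node0 reported `HugePages_Total: 512` and node1 `HugePages_Total: 513` in the dumps above — the odd total of 1025 is split unevenly across the two NUMA nodes on purpose. `get_nodes` at `hugepages.sh@27-@33` captured those per-node counts, and the loop at `@115-@117` folds any surplus into each node's expectation before the per-node `HugePages_Surp` reads traced here. A hedged restatement (not the verbatim SPDK code; the exact sysfs file read is an assumption, though `hugepages-2048kB` matches the `Hugepagesize: 2048 kB` in the dumps):

```bash
#!/usr/bin/env bash
shopt -s extglob
declare -a nodes_sys=()

for node in /sys/devices/system/node/node+([0-9]); do
	nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "per-node counts: ${nodes_sys[*]}"   # "512 513" in this run

total=0
for n in "${nodes_sys[@]}"; do (( total += n )); done
(( total == 1025 )) && echo "per-node counts re-add to HugePages_Total"
```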
00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # continue 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.532 03:16:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.532 03:16:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.532 03:16:07 -- setup/common.sh@33 -- # echo 0 00:02:30.532 03:16:07 -- setup/common.sh@33 -- # return 0 00:02:30.532 03:16:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:30.532 03:16:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:30.532 03:16:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:30.532 03:16:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:30.532 03:16:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:30.532 node0=512 expecting 513 00:02:30.532 03:16:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:30.532 03:16:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:30.532 03:16:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:30.532 03:16:07 -- 
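The get_meminfo helper traced above always follows the same pattern: default to /proc/meminfo, switch to /sys/devices/system/node/node<N>/meminfo when a node id is supplied, strip the "Node N " prefix those per-node files carry, then read key by key until the requested field matches, echoing 0 if it never does. A minimal standalone sketch of that pattern (names are illustrative, not SPDK's actual helper):

    #!/usr/bin/env bash
    # Sketch: return the value of one meminfo field, optionally for a NUMA node.
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {
        local get=$1 node=$2
        local var val _ mem mem_f=/proc/meminfo
        # A node id switches the source to that node's sysfs meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        echo 0   # field absent: report zero, as the trace does
    }

Calling get_meminfo_sketch HugePages_Surp 1 on this host would print 0, matching the per-node reads above.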
00:02:30.532 03:16:07 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:02:30.532 node1=513 expecting 512
00:02:30.532 03:16:07 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:02:30.532
00:02:30.532 real 0m1.378s
00:02:30.532 user 0m0.607s
00:02:30.532 sys 0m0.735s
00:02:30.533 03:16:07 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:30.533 03:16:07 -- common/autotest_common.sh@10 -- # set +x
00:02:30.533 ************************************
00:02:30.533 END TEST odd_alloc
00:02:30.533 ************************************
00:02:30.533 03:16:07 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:02:30.533 03:16:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:30.533 03:16:07 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:30.533 03:16:07 -- common/autotest_common.sh@10 -- # set +x
00:02:30.791 ************************************
00:02:30.791 START TEST custom_alloc
00:02:30.791 ************************************
00:02:30.791 03:16:08 -- common/autotest_common.sh@1111 -- # custom_alloc
00:02:30.791 03:16:08 -- setup/hugepages.sh@167 -- # local IFS=,
00:02:30.791 03:16:08 -- setup/hugepages.sh@169 -- # local node
00:02:30.791 03:16:08 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:02:30.791 03:16:08 -- setup/hugepages.sh@170 -- # local nodes_hp
00:02:30.791 03:16:08 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:02:30.791 03:16:08 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:02:30.791 03:16:08 -- setup/hugepages.sh@49 -- # local size=1048576
00:02:30.791 03:16:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:30.791 03:16:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:30.791 03:16:08 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:30.791 03:16:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:30.791 03:16:08 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:30.791 03:16:08 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:30.791 03:16:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:30.791 03:16:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:30.791 03:16:08 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:30.791 03:16:08 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:30.791 03:16:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:30.791 03:16:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:30.791 03:16:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:30.791 03:16:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:02:30.791 03:16:08 -- setup/hugepages.sh@83 -- # : 256
00:02:30.791 03:16:08 -- setup/hugepages.sh@84 -- # : 1
00:02:30.791 03:16:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:30.791 03:16:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:02:30.791 03:16:08 -- setup/hugepages.sh@83 -- # : 0
00:02:30.791 03:16:08 -- setup/hugepages.sh@84 -- # : 0
00:02:30.791 03:16:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:30.791 03:16:08 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:02:30.791 03:16:08 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:02:30.791 03:16:08 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:02:30.791 03:16:08 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:30.791 03:16:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:30.791 03:16:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:30.791 03:16:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:30.791 03:16:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:30.791 03:16:08 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:30.791 03:16:08 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:30.791 03:16:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:30.791 03:16:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:30.791 03:16:08 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:30.791 03:16:08 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:30.791 03:16:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:30.791 03:16:08 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:02:30.791 03:16:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:30.791 03:16:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:02:30.791 03:16:08 -- setup/hugepages.sh@78 -- # return 0
00:02:30.791 03:16:08 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:02:30.791 03:16:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:02:30.791 03:16:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:02:30.791 03:16:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:02:30.791 03:16:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:02:30.792 03:16:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:02:30.792 03:16:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:02:30.792 03:16:08 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:02:30.792 03:16:08 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:30.792 03:16:08 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:30.792 03:16:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:30.792 03:16:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:30.792 03:16:08 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:30.792 03:16:08 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:30.792 03:16:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:30.792 03:16:08 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:02:30.792 03:16:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:30.792 03:16:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:02:30.792 03:16:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:30.792 03:16:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:02:30.792 03:16:08 -- setup/hugepages.sh@78 -- # return 0
00:02:30.792 03:16:08 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:02:30.792 03:16:08 -- setup/hugepages.sh@187 -- # setup output
00:02:30.792 03:16:08 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:30.792 03:16:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
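The arithmetic behind the trace above is simple: 1 GiB of default 2048 kB hugepages is 512 pages, split evenly across the two NUMA nodes (256 each); custom_alloc then installs the explicit per-node plan nodes_hp[0]=512, nodes_hp[1]=1024 and joins it into the HUGENODE string that setup.sh (whose device output follows) consumes. A sketch of that computation under those assumptions (variable names illustrative; the real logic lives in setup/hugepages.sh):

    # Sketch: derive per-node hugepage counts and the HUGENODE string.
    default_hugepages=$((2048 * 1024))    # bytes per 2 MB hugepage
    size=$((1024 * 1024 * 1024))          # 1 GiB requested
    no_nodes=2

    nr_hugepages=$((size / default_hugepages))   # 512 pages total
    declare -a nodes_test
    for ((_no_nodes = no_nodes; _no_nodes > 0; _no_nodes--)); do
        nodes_test[_no_nodes - 1]=$((nr_hugepages / no_nodes))   # 256 per node
    done

    nodes_hp=([0]=512 [1]=1024)           # the explicit per-node plan in this test
    hugenode=()
    for node in "${!nodes_hp[@]}"; do
        hugenode+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    old_ifs=$IFS; IFS=,
    HUGENODE="${hugenode[*]}"             # nodes_hp[0]=512,nodes_hp[1]=1024
    IFS=$old_ifs
    echo "$HUGENODE"

The comma join is the same trick the traced function uses: with IFS set to ",", the unquoted-star expansion "${hugenode[*]}" concatenates the array elements with commas, which is why custom_alloc declares local IFS=, up front.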
00:02:31.729 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:31.729 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:31.729 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:31.729 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:31.729 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:31.729 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:31.729 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:31.994 03:16:09 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:02:31.994 03:16:09 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:02:31.994 03:16:09 -- setup/hugepages.sh@89 -- # local node
00:02:31.994 03:16:09 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:31.994 03:16:09 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:31.994 03:16:09 -- setup/hugepages.sh@92 -- # local surp
00:02:31.994 03:16:09 -- setup/hugepages.sh@93 -- # local resv
00:02:31.994 03:16:09 -- setup/hugepages.sh@94 -- # local anon
00:02:31.994 03:16:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:31.994 03:16:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:31.994 03:16:09 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:31.994 03:16:09 -- setup/common.sh@18 -- # local node=
00:02:31.994 03:16:09 -- setup/common.sh@19 -- # local var val
00:02:31.994 03:16:09 -- setup/common.sh@20 -- # local mem_f mem
00:02:31.994 03:16:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:31.994 03:16:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:31.994 03:16:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:31.994 03:16:09 -- setup/common.sh@28 -- # mapfile -t mem
00:02:31.994 03:16:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:31.994 03:16:09 -- setup/common.sh@31 -- # IFS=': '
00:02:31.994 03:16:09 -- setup/common.sh@31 -- # read -r var val _
00:02:31.994 03:16:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 37287876 kB' 'MemAvailable: 40981648 kB' 'Buffers: 2696 kB' 'Cached: 17767560 kB' 'SwapCached: 0 kB' 'Active: 14718156 kB' 'Inactive: 3498348 kB' 'Active(anon): 14130984 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 449524 kB' 'Mapped: 209584 kB' 'Shmem: 13684736 kB' 'KReclaimable: 204180 kB' 'Slab: 583860 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379680 kB' 'KernelStack: 12784 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 15265380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
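With the pages reallocated, verify_nr_hugepages re-reads the counters that matter from the dump above and the reads that follow: anonymous hugepages (only consulted because transparent hugepages are not forced on, per the [madvise] check), surplus and reserved pages, and the configured total. In outline, the verification amounts to the sketch below, reusing the hypothetical get_meminfo_sketch from earlier; the values in comments are this run's:

    nr_hugepages=1536                              # requested: 512 + 1024
    anon=$(get_meminfo_sketch AnonHugePages)       # 0 here: no THP in use
    surp=$(get_meminfo_sketch HugePages_Surp)      # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0
    total=$(get_meminfo_sketch HugePages_Total)    # 1536

    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2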
00:02:31.994 [... xtrace condensed: setup/common.sh@31-32 compare each /proc/meminfo key from the dump above (MemTotal through HardwareCorrupted) against AnonHugePages, skipping every non-match via continue ...]
00:02:31.995 03:16:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:31.995 03:16:09 -- setup/common.sh@33 -- # echo 0
00:02:31.995 03:16:09 -- setup/common.sh@33 -- # return 0
00:02:31.995 03:16:09 -- setup/hugepages.sh@97 -- # anon=0
00:02:31.995 03:16:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:31.995 03:16:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:31.995 03:16:09 -- setup/common.sh@18 -- # local node=
00:02:31.995 03:16:09 -- setup/common.sh@19 -- # local var val
00:02:31.995 03:16:09 -- setup/common.sh@20 -- # local mem_f mem
00:02:31.995 03:16:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:31.995 03:16:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:31.995 03:16:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:31.995 03:16:09 -- setup/common.sh@28 -- # mapfile -t mem
00:02:31.995 03:16:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:31.995 03:16:09 -- setup/common.sh@31 -- # IFS=': '
00:02:31.995 03:16:09 -- setup/common.sh@31 -- # read -r var val _
00:02:31.995 03:16:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 37287624 kB' 'MemAvailable: 40981396 kB' 'Buffers: 2696 kB' 'Cached: 17767560 kB' 'SwapCached: 0 kB' 'Active: 14718240 kB' 'Inactive: 3498348 kB' 'Active(anon): 14131068 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 449696 kB' 'Mapped: 209568 kB' 'Shmem: 13684736 kB' 'KReclaimable: 204180 kB' 'Slab: 583836 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379656 kB' 'KernelStack: 12848 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 15265392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196584 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
00:02:31.995 [... xtrace condensed: the same key-by-key scan runs over this dump (MemTotal through HugePages_Free) until HugePages_Surp matches ...]
00:02:31.997 03:16:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:31.997 03:16:09 -- setup/common.sh@33 -- # echo 0
00:02:31.997 03:16:09 -- setup/common.sh@33 -- # return 0
00:02:31.997 03:16:09 -- setup/hugepages.sh@99 -- # surp=0
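A side note on the odd-looking \H\u\g\e\P\a\g\e\s\_\S\u\r\p tokens that fill this trace: bash's xtrace re-quotes the right-hand side of == inside [[ ]] by backslash-escaping every character, marking the operand as a literal string rather than a glob pattern. A two-line sketch reproduces the effect (my reading of bash's xtrace behavior, not anything SPDK-specific):

    set -x
    get=HugePages_Rsvd
    [[ HugePages_Rsvd == "$get" ]] && echo matched
    # xtrace prints: [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]

The HugePages_Rsvd read that follows ends the same way as the two before it: the key matches, the helper echoes its value, and the accounting identity total == nr_hugepages + surplus + reserved is checked.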
03:16:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:31.997 03:16:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:31.997 03:16:09 -- setup/common.sh@28 -- # mapfile -t mem 00:02:31.997 03:16:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.997 03:16:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 37288320 kB' 'MemAvailable: 40982092 kB' 'Buffers: 2696 kB' 'Cached: 17767572 kB' 'SwapCached: 0 kB' 'Active: 14717572 kB' 'Inactive: 3498348 kB' 'Active(anon): 14130400 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448996 kB' 'Mapped: 209452 kB' 'Shmem: 13684748 kB' 'KReclaimable: 204180 kB' 'Slab: 583816 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379636 kB' 'KernelStack: 12832 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 15265408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196584 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB' 00:02:31.997 03:16:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.997 03:16:09 -- setup/common.sh@32 -- # continue 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.997 03:16:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.997 03:16:09 -- setup/common.sh@32 -- # continue 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.997 03:16:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.997 03:16:09 -- setup/common.sh@32 -- # continue 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.997 03:16:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.997 03:16:09 -- setup/common.sh@32 -- # continue 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.997 03:16:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.997 03:16:09 -- setup/common.sh@32 -- # continue 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.997 03:16:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.997 03:16:09 -- setup/common.sh@32 -- # continue 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.997 03:16:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.997 03:16:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.997 03:16:09 -- 
00:02:31.997 03:16:09 -- setup/common.sh@32 -- # continue
00:02:31.997 03:16:09 -- setup/common.sh@31 -- # IFS=': '
00:02:31.997 03:16:09 -- setup/common.sh@31 -- # read -r var val _
[xtrace repeats the same [[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue / IFS / read cycle for every remaining /proc/meminfo field -- Inactive, the Active/Inactive (anon) and (file) splits, Unevictable, Mlocked, the swap and zswap counters, Dirty, Writeback, AnonPages, Mapped, Shmem, the slab and kernel-stack counters, the page-table, vmalloc, percpu, THP, CMA and file-hugepage counters, HugePages_Total and HugePages_Free -- none of which match the requested key]
00:02:31.998 03:16:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:31.998 03:16:09 -- setup/common.sh@33 -- # echo 0
00:02:31.998 03:16:09 -- setup/common.sh@33 -- # return 0
00:02:31.998 03:16:09 -- setup/hugepages.sh@100 -- # resv=0
00:02:31.998 03:16:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:02:31.998 nr_hugepages=1536
00:02:31.998 03:16:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:31.998 resv_hugepages=0
00:02:31.998 03:16:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:31.998 surplus_hugepages=0
00:02:31.998 03:16:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:31.998 anon_hugepages=0
00:02:31.998 03:16:09 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:31.998 03:16:09 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
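The two arithmetic guards above are the heart of this verification step: setup/hugepages.sh compares the HugePages_Total it reads back from the kernel against the page count the test requested plus any surplus and reserved pages. A minimal standalone version of that invariant, assuming /proc/sys/vm/nr_hugepages stands in for the requested count (the function name and error message are mine, not SPDK's):

  #!/usr/bin/env bash
  # Mirror the setup/hugepages.sh@107 check: every hugepage the kernel
  # reports must be accounted for by requested + surplus + reserved pages.
  check_hugepage_accounting() {
      local nr total surp resv
      nr=$(cat /proc/sys/vm/nr_hugepages)
      total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
      surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
      resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
      (( total == nr + surp + resv )) || {
          echo "mismatch: total=$total nr=$nr surp=$surp resv=$resv" >&2
          return 1
      }
  }
  check_hugepage_accounting && echo 'hugepage accounting OK'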
00:02:31.998 03:16:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:31.998 03:16:09 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:31.998 03:16:09 -- setup/common.sh@18 -- # local node=
00:02:31.998 03:16:09 -- setup/common.sh@19 -- # local var val
00:02:31.998 03:16:09 -- setup/common.sh@20 -- # local mem_f mem
00:02:31.998 03:16:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:31.998 03:16:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:31.998 03:16:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:31.998 03:16:09 -- setup/common.sh@28 -- # mapfile -t mem
00:02:31.998 03:16:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:31.998 03:16:09 -- setup/common.sh@31 -- # IFS=': '
00:02:31.998 03:16:09 -- setup/common.sh@31 -- # read -r var val _
00:02:31.998 03:16:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 37289016 kB' 'MemAvailable: 40982788 kB' 'Buffers: 2696 kB' 'Cached: 17767572 kB' 'SwapCached: 0 kB' 'Active: 14717272 kB' 'Inactive: 3498348 kB' 'Active(anon): 14130100 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448696 kB' 'Mapped: 209452 kB' 'Shmem: 13684748 kB' 'KReclaimable: 204180 kB' 'Slab: 583816 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379636 kB' 'KernelStack: 12832 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 15265420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196600 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
[get_meminfo then scans this dump with the same test/continue cycle, skipping every field until HugePages_Total]
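All of the [[ ... ]] / continue pairs that dominate this trace come from one small loop: get_meminfo prints the meminfo dump and walks it one 'key: value' line at a time until the requested field matches. A standalone sketch of the same parsing pattern (the function name is illustrative, not SPDK's):

  #!/usr/bin/env bash
  # Return the value column of one /proc/meminfo field, e.g. HugePages_Total.
  get_meminfo_field() {
      local want=$1 var val _
      while IFS=': ' read -r var val _; do
          # Every non-matching key falls through to continue, which is why
          # xtrace logs one [[ ... ]] test per meminfo line in this output.
          [[ $var == "$want" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo_field HugePages_Total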
00:02:32.000 03:16:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:32.000 03:16:09 -- setup/common.sh@33 -- # echo 1536
00:02:32.000 03:16:09 -- setup/common.sh@33 -- # return 0
00:02:32.000 03:16:09 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:32.000 03:16:09 -- setup/hugepages.sh@112 -- # get_nodes
00:02:32.000 03:16:09 -- setup/hugepages.sh@27 -- # local node
00:02:32.000 03:16:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:32.000 03:16:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:32.000 03:16:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:32.000 03:16:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:32.000 03:16:09 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:32.000 03:16:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:32.000 03:16:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:32.000 03:16:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:32.000 03:16:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:32.000 03:16:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:32.000 03:16:09 -- setup/common.sh@18 -- # local node=0
00:02:32.000 03:16:09 -- setup/common.sh@19 -- # local var val
00:02:32.000 03:16:09 -- setup/common.sh@20 -- # local mem_f mem
00:02:32.000 03:16:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:32.000 03:16:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:32.000 03:16:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:32.000 03:16:09 -- setup/common.sh@28 -- # mapfile -t mem
00:02:32.000 03:16:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:32.000 03:16:09 -- setup/common.sh@31 -- # IFS=': '
00:02:32.000 03:16:09 -- setup/common.sh@31 -- # read -r var val _
00:02:32.000 03:16:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19877632 kB' 'MemUsed: 12999308 kB' 'SwapCached: 0 kB' 'Active: 7592688 kB' 'Inactive: 3329304 kB' 'Active(anon): 7360084 kB' 'Inactive(anon): 0 kB' 'Active(file): 232604 kB' 'Inactive(file): 3329304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10762980 kB' 'Mapped: 104092 kB' 'AnonPages: 162260 kB' 'Shmem: 7201072 kB' 'KernelStack: 5944 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105896 kB' 'Slab: 327352 kB' 'SReclaimable: 105896 kB' 'SUnreclaim: 221456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
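get_nodes and the per-node get_meminfo call above show the NUMA side of the check: nodes are discovered by globbing /sys/devices/system/node/, and each node's counters come from its own meminfo file, whose lines carry a 'Node <N> ' prefix that the script strips before parsing. A sketch of both steps; the sysfs hugepage file read inside the loop is my assumption about where nodes_sys gets its 512/1024 values, since the trace only shows the resulting assignments:

  #!/usr/bin/env bash
  shopt -s extglob nullglob
  declare -A nodes_sys=()
  # Discover NUMA nodes the way hugepages.sh@29 does, recording each
  # node's current count of 2 MiB hugepages from sysfs.
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"

  # Read one field from a single node's meminfo, stripping the
  # "Node <N> " prefix so the generic key:value parser still works.
  node_meminfo() {
      local node=$1 want=$2 var val _
      local -a mem
      mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  node_meminfo 0 HugePages_Surp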
[node0's dump is scanned with the same test/continue cycle until the requested key matches]
00:02:32.001 03:16:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:32.001 03:16:09 -- setup/common.sh@33 -- # echo 0
00:02:32.001 03:16:09 -- setup/common.sh@33 -- # return 0
00:02:32.001 03:16:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:32.001 03:16:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:32.001 03:16:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:32.001 03:16:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:32.001 03:16:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:32.001 03:16:09 -- setup/common.sh@18 -- # local node=1
00:02:32.001 03:16:09 -- setup/common.sh@19 -- # local var val
00:02:32.001 03:16:09 -- setup/common.sh@20 -- # local mem_f mem
00:02:32.001 03:16:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:32.001 03:16:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:32.001 03:16:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:32.001 03:16:09 -- setup/common.sh@28 -- # mapfile -t mem
00:02:32.001 03:16:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:32.001 03:16:09 -- setup/common.sh@31 -- # IFS=': '
00:02:32.001 03:16:09 -- setup/common.sh@31 -- # read -r var val _
00:02:32.001 03:16:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664776 kB' 'MemFree: 17412676 kB' 'MemUsed: 10252100 kB' 'SwapCached: 0 kB' 'Active: 7124924 kB' 'Inactive: 169044 kB' 'Active(anon): 6770356 kB' 'Inactive(anon): 0 kB' 'Active(file): 354568 kB' 'Inactive(file): 169044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7007332 kB' 'Mapped: 105360 kB' 'AnonPages: 286736 kB' 'Shmem: 6483720 kB' 'KernelStack: 6888 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98284 kB' 'Slab: 256464 kB' 'SReclaimable: 98284 kB' 'SUnreclaim: 158180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[node1's dump is scanned the same way until HugePages_Surp matches]
00:02:32.002 03:16:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:32.002 03:16:09 -- setup/common.sh@33 -- # echo 0
00:02:32.002 03:16:09 -- setup/common.sh@33 -- # return 0
00:02:32.002 03:16:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:32.002 03:16:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:32.002 03:16:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:32.002 03:16:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:32.002 03:16:09 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:32.002 node0=512 expecting 512
00:02:32.002 03:16:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:32.002 03:16:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:32.002 03:16:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:32.002 03:16:09 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:02:32.002 node1=1024 expecting 1024
00:02:32.002 03:16:09 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:02:32.002
00:02:32.002 real 0m1.445s
00:02:32.002 user 0m0.631s
00:02:32.002 sys 0m0.777s
00:02:32.002 03:16:09 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:32.002 03:16:09 -- common/autotest_common.sh@10 -- # set +x
00:02:32.002 ************************************
00:02:32.002 END TEST custom_alloc
00:02:32.002 ************************************
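A side note on reading these traces: patterns like \5\1\2\,\1\0\2\4 and \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not in the script source. The right-hand side of == inside [[ ]] is a glob pattern, so the scripts quote it to force a literal comparison, and bash's xtrace re-prints the quoted operand with every character backslash-escaped. A two-line demonstration:

  #!/usr/bin/env bash
  set -x
  expected='512,1024'
  # Quoting the right side disables glob matching; xtrace renders the
  # quoted operand as \5\1\2\,\1\0\2\4, exactly as in the log above.
  [[ "512,1024" == "$expected" ]] && echo 'nodes match'
  set +x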
00:02:32.260 03:16:09 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:02:32.260 03:16:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:32.260 03:16:09 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:32.260 03:16:09 -- common/autotest_common.sh@10 -- # set +x
00:02:32.260 ************************************
00:02:32.260 START TEST no_shrink_alloc
00:02:32.260 ************************************
00:02:32.260 03:16:09 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:02:32.260 03:16:09 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:02:32.260 03:16:09 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:32.260 03:16:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:32.260 03:16:09 -- setup/hugepages.sh@51 -- # shift
00:02:32.260 03:16:09 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:32.260 03:16:09 -- setup/hugepages.sh@52 -- # local node_ids
00:02:32.260 03:16:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:32.260 03:16:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:32.260 03:16:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:32.260 03:16:09 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:32.260 03:16:09 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:32.260 03:16:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:32.260 03:16:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:32.260 03:16:09 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:32.260 03:16:09 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:32.260 03:16:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:32.260 03:16:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:32.260 03:16:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:32.260 03:16:09 -- setup/hugepages.sh@73 -- # return 0
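get_test_nr_hugepages turned the requested size into a page count and, because the caller passed node id 0, pinned the whole allocation to node 0 in nodes_test. A hedged sketch of that conversion and distribution, treating the size argument as kB, which is what makes the trace's numbers work out (2097152 kB / 2048 kB per page = 1024); the helper name reuse is illustrative:

  #!/usr/bin/env bash
  get_test_nr_hugepages() {
      local size_kb=$1; shift
      local -a user_nodes=("$@")
      local hp_kb nr n
      hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this machine
      nr=$(( size_kb / hp_kb ))                                  # 2097152 / 2048 = 1024
      declare -gA nodes_test=()
      # With an explicit node list, every page is assigned to those nodes;
      # the real script otherwise spreads pages across all nodes.
      for n in "${user_nodes[@]}"; do
          nodes_test[$n]=$nr
      done
      echo "nr_hugepages=$nr nodes_test[0]=${nodes_test[0]:-unset}"
  }
  get_test_nr_hugepages 2097152 0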
00:02:32.260 03:16:09 -- setup/hugepages.sh@198 -- # setup output
00:02:32.260 03:16:09 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:32.260 03:16:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:33.642 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:33.642 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:33.642 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:33.642 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:33.642 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:33.642 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:33.642 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:33.642 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:33.642 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:33.642 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:33.642 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:33.642 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:33.642 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:33.642 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:33.642 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:33.642 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:33.642 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:33.642 03:16:10 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:02:33.642 03:16:10 -- setup/hugepages.sh@89 -- # local node
00:02:33.642 03:16:10 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:33.642 03:16:10 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:33.642 03:16:10 -- setup/hugepages.sh@92 -- # local surp
00:02:33.642 03:16:10 -- setup/hugepages.sh@93 -- # local resv
00:02:33.642 03:16:10 -- setup/hugepages.sh@94 -- # local anon
00:02:33.642 03:16:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:33.642 03:16:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:33.642 03:16:10 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:33.642 03:16:10 -- setup/common.sh@18 -- # local node=
00:02:33.642 03:16:10 -- setup/common.sh@19 -- # local var val
00:02:33.642 03:16:10 -- setup/common.sh@20 -- # local mem_f mem
00:02:33.642 03:16:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:33.642 03:16:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:33.642 03:16:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:33.642 03:16:10 -- setup/common.sh@28 -- # mapfile -t mem
00:02:33.642 03:16:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:33.642 03:16:10 -- setup/common.sh@31 -- # IFS=': '
00:02:33.642 03:16:10 -- setup/common.sh@31 -- # read -r var val _
00:02:33.642 03:16:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38338520 kB' 'MemAvailable: 42032292 kB' 'Buffers: 2696 kB' 'Cached: 17767660 kB' 'SwapCached: 0 kB' 'Active: 14717500 kB' 'Inactive: 3498348 kB' 'Active(anon): 14130328 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448688 kB' 'Mapped: 209584 kB' 'Shmem: 13684836 kB' 'KReclaimable: 204180 kB' 'Slab: 583828 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379648 kB' 'KernelStack: 12832 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15265608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196648 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
[the dump is scanned key by key -- MemTotal through HardwareCorrupted -- until AnonHugePages matches]
00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:33.643 03:16:10 -- setup/common.sh@33 -- # echo 0
00:02:33.643 03:16:10 -- setup/common.sh@33 -- # return 0
00:02:33.643 03:16:10 -- setup/hugepages.sh@97 -- # anon=0
00:02:33.643 03:16:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:33.643 03:16:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:33.643 03:16:10 -- setup/common.sh@18 -- # local node=
00:02:33.643 03:16:10 -- setup/common.sh@19 -- # local var val
00:02:33.643 03:16:10 -- setup/common.sh@20 -- # local mem_f mem
00:02:33.643 03:16:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:33.643 03:16:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:33.643 03:16:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:33.643 03:16:10 -- setup/common.sh@28 -- # mapfile -t mem
00:02:33.643 03:16:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': '
00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _
00:02:33.643 03:16:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38339408 kB' 'MemAvailable: 42033180 kB' 'Buffers: 2696 kB' 'Cached: 17767664 kB' 'SwapCached: 0 kB' 'Active: 14717200 kB' 'Inactive: 3498348 kB' 'Active(anon): 14130028 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB'
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448448 kB' 'Mapped: 209556 kB' 'Shmem: 13684840 kB' 'KReclaimable: 204180 kB' 'Slab: 583824 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379644 kB' 'KernelStack: 12832 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15265620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196600 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB' 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- 
setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.643 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.643 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.643 03:16:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.643 03:16:11 -- setup/common.sh@33 -- # echo 0 00:02:33.643 03:16:11 -- setup/common.sh@33 -- # return 0 00:02:33.643 03:16:11 -- setup/hugepages.sh@99 -- # surp=0 00:02:33.643 03:16:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:33.643 03:16:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:33.643 03:16:11 -- setup/common.sh@18 -- # local node= 00:02:33.643 03:16:11 -- setup/common.sh@19 -- # local var val 00:02:33.644 03:16:11 -- setup/common.sh@20 -- # local mem_f mem 00:02:33.644 03:16:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.644 03:16:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:33.644 03:16:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:33.644 03:16:11 -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.644 03:16:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38341200 kB' 'MemAvailable: 42034972 kB' 'Buffers: 2696 kB' 'Cached: 17767676 kB' 'SwapCached: 0 kB' 'Active: 14717680 kB' 'Inactive: 3498348 kB' 'Active(anon): 14130508 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448660 kB' 'Mapped: 209488 kB' 'Shmem: 13684852 kB' 'KReclaimable: 204180 kB' 'Slab: 583832 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379652 kB' 'KernelStack: 12848 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15268040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196600 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB' 00:02:33.644 03:16:11 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- 
setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 
03:16:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.644 03:16:11 -- setup/common.sh@33 -- # echo 0 00:02:33.644 
03:16:11 -- setup/common.sh@33 -- # return 0 00:02:33.644 03:16:11 -- setup/hugepages.sh@100 -- # resv=0 00:02:33.644 03:16:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:33.644 nr_hugepages=1024 00:02:33.644 03:16:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:33.644 resv_hugepages=0 00:02:33.644 03:16:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:33.644 surplus_hugepages=0 00:02:33.644 03:16:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:33.644 anon_hugepages=0 00:02:33.644 03:16:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:33.644 03:16:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:33.644 03:16:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:33.644 03:16:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:33.644 03:16:11 -- setup/common.sh@18 -- # local node= 00:02:33.644 03:16:11 -- setup/common.sh@19 -- # local var val 00:02:33.644 03:16:11 -- setup/common.sh@20 -- # local mem_f mem 00:02:33.644 03:16:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.644 03:16:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:33.644 03:16:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:33.644 03:16:11 -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.644 03:16:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38343508 kB' 'MemAvailable: 42037280 kB' 'Buffers: 2696 kB' 'Cached: 17767688 kB' 'SwapCached: 0 kB' 'Active: 14718120 kB' 'Inactive: 3498348 kB' 'Active(anon): 14130948 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 449032 kB' 'Mapped: 209488 kB' 'Shmem: 13684864 kB' 'KReclaimable: 204180 kB' 'Slab: 583792 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379612 kB' 'KernelStack: 13088 kB' 'PageTables: 9000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15266664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196792 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB' 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
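With all three scans done the script holds anon=0, surp=0, resv=0, prints the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary seen above, and asserts the accounting at hugepages.sh@107 and @109: HugePages_Total must equal the requested count plus surplus plus reserved, and here also the requested count exactly, i.e. 1024 == 1024 + 0 + 0. The same checks, standalone, with the values visible in this log:

# Standalone restatement of the traced assertions; values are the ones in this log.
nr_hugepages=1024 # requested pool size
surp=0            # HugePages_Surp
resv=0            # HugePages_Rsvd
total=1024        # HugePages_Total
((total == nr_hugepages + surp + resv)) # no stray surplus or reserved pages
((total == nr_hugepages))               # kernel allocated everything requested

The get_meminfo HugePages_Total scan that starts here re-reads the total from /proc/meminfo so that hugepages.sh@110 can re-verify the same identity against a fresh value.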
00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.644 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.644 03:16:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- 
# read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 
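A note on the volume of the scan running through these records: under xtrace a single HugePages_Total lookup expands to one [[ ]]/continue pair per /proc/meminfo field, roughly fifty pairs, which is what inflates this log. Outside the framework the same value is a one-liner; the loop form earns its keep by letting one helper serve /proc/meminfo and the per-node sysfs files alike.

# One-liner equivalent of the HugePages_Total scan (illustration only):
awk '/^HugePages_Total:/ {print $2}' /proc/meminfo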
00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 
03:16:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # continue 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.645 03:16:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.645 03:16:11 -- setup/common.sh@33 -- # echo 1024 00:02:33.645 03:16:11 -- setup/common.sh@33 -- # return 0 00:02:33.645 03:16:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:33.645 03:16:11 -- setup/hugepages.sh@112 -- # get_nodes 00:02:33.645 03:16:11 -- setup/hugepages.sh@27 -- # local node 00:02:33.645 03:16:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:33.645 03:16:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:33.645 03:16:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:33.645 03:16:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:33.645 03:16:11 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:33.645 03:16:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:33.645 03:16:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:33.645 03:16:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:33.645 03:16:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:33.645 03:16:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:33.645 03:16:11 
-- setup/common.sh@18 -- # local node=0
00:02:33.645 03:16:11 -- setup/common.sh@19 -- # local var val
00:02:33.645 03:16:11 -- setup/common.sh@20 -- # local mem_f mem
00:02:33.645 03:16:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:33.645 03:16:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:33.645 03:16:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:33.645 03:16:11 -- setup/common.sh@28 -- # mapfile -t mem
00:02:33.645 03:16:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:33.645 03:16:11 -- setup/common.sh@31 -- # IFS=': '
00:02:33.645 03:16:11 -- setup/common.sh@31 -- # read -r var val _
00:02:33.645 03:16:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 18834240 kB' 'MemUsed: 14042700 kB' 'SwapCached: 0 kB' 'Active: 7595028 kB' 'Inactive: 3329304 kB' 'Active(anon): 7362424 kB' 'Inactive(anon): 0 kB' 'Active(file): 232604 kB' 'Inactive(file): 3329304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10763000 kB' 'Mapped: 104092 kB' 'AnonPages: 164216 kB' 'Shmem: 7201092 kB' 'KernelStack: 6360 kB' 'PageTables: 5968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105896 kB' 'Slab: 327300 kB' 'SReclaimable: 105896 kB' 'SUnreclaim: 221404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32 xtrace elided: every node0 key from MemTotal through HugePages_Free fails the HugePages_Surp match and hits continue]
00:02:33.646 03:16:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:33.646 03:16:11 -- setup/common.sh@33 -- # echo 0
00:02:33.646 03:16:11 -- setup/common.sh@33 -- # return 0
00:02:33.646 03:16:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:33.646 03:16:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:33.646 03:16:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:33.646 03:16:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:33.646 03:16:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:02:33.646 03:16:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:33.646 03:16:11 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:02:33.646 03:16:11 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:02:33.646 03:16:11 -- setup/hugepages.sh@202 -- # setup output
00:02:33.646 03:16:11 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:33.646 03:16:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:35.023 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:35.023 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:35.023 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:35.023 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:35.023 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:35.023 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:35.023 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:35.023 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:35.023 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:35.023 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:35.023 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:35.023 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:35.023 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:35.023 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:35.023 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:35.023 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:35.023 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:35.023 INFO: Requested 512 hugepages but 1024 already allocated on node0
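The records above and below are the bash xtrace of the get_meminfo helper in the test's setup/common.sh. Reconstructed from the trace alone (a minimal sketch; the real script may differ in detail), the helper amounts to roughly this:

  #!/usr/bin/env bash
  shopt -s extglob   # required by the +([0-9]) pattern below

  # get_meminfo KEY [NODE]
  # Prints KEY's value from /proc/meminfo, or from the per-node meminfo
  # file when NODE is given (the node0 lookup traced above took that path).
  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # per-node lines are prefixed "Node N "; strip it so both file
      # formats parse identically
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          # unquoted right-hand side = pattern match, which is why the
          # xtrace renders the key escaped, e.g. \H\u\g\e\P\a\g\e\s\_\S\u\r\p
          [[ $var == $get ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Surp 0   # reproduces the node0 lookup: prints 0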
03:16:12 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:02:35.023 03:16:12 -- setup/hugepages.sh@89 -- # local node
00:02:35.023 03:16:12 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:35.023 03:16:12 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:35.023 03:16:12 -- setup/hugepages.sh@92 -- # local surp
00:02:35.023 03:16:12 -- setup/hugepages.sh@93 -- # local resv
00:02:35.023 03:16:12 -- setup/hugepages.sh@94 -- # local anon
00:02:35.023 03:16:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:35.023 03:16:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:35.023 03:16:12 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:35.023 03:16:12 -- setup/common.sh@18 -- # local node=
00:02:35.023 03:16:12 -- setup/common.sh@19 -- # local var val
00:02:35.023 03:16:12 -- setup/common.sh@20 -- # local mem_f mem
00:02:35.023 03:16:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:35.023 03:16:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:35.023 03:16:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:35.023 03:16:12 -- setup/common.sh@28 -- # mapfile -t mem
00:02:35.023 03:16:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:35.023 03:16:12 -- setup/common.sh@31 -- # IFS=': '
00:02:35.023 03:16:12 -- setup/common.sh@31 -- # read -r var val _
00:02:35.023 03:16:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38334396 kB' 'MemAvailable: 42028168 kB' 'Buffers: 2696 kB' 'Cached: 17767732 kB' 'SwapCached: 0 kB' 'Active: 14716888 kB' 'Inactive: 3498348 kB' 'Active(anon): 14129716 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447984 kB' 'Mapped: 209508 kB' 'Shmem: 13684908 kB' 'KReclaimable: 204180 kB' 'Slab: 584084 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379904 kB' 'KernelStack: 12816 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15265808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196648 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
[setup/common.sh@31-32 xtrace elided: every key from MemTotal through HardwareCorrupted fails the AnonHugePages match and hits continue]
00:02:35.024 03:16:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:35.024 03:16:12 -- setup/common.sh@33 -- # echo 0
00:02:35.024 03:16:12 -- setup/common.sh@33 -- # return 0
00:02:35.024 03:16:12 -- setup/hugepages.sh@97 -- # anon=0
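The mem=("${mem[@]#Node +([0-9]) }") step at setup/common.sh@29 deserves a note: per-node meminfo lines carry a "Node <n> " prefix that the global /proc/meminfo lines lack, and this extglob substitution strips it. In isolation (values borrowed from the node0 snapshot above):

  shopt -s extglob
  mem=('Node 0 MemTotal: 32876940 kB' 'Node 0 HugePages_Surp: 0')
  mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node <digits> "
  printf '%s\n' "${mem[@]}"
  # MemTotal: 32876940 kB
  # HugePages_Surp: 0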
00:02:35.024 03:16:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:35.024 03:16:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:35.024 03:16:12 -- setup/common.sh@18 -- # local node=
00:02:35.024 03:16:12 -- setup/common.sh@19 -- # local var val
00:02:35.024 03:16:12 -- setup/common.sh@20 -- # local mem_f mem
00:02:35.024 03:16:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:35.024 03:16:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:35.024 03:16:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:35.024 03:16:12 -- setup/common.sh@28 -- # mapfile -t mem
00:02:35.024 03:16:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:35.024 03:16:12 -- setup/common.sh@31 -- # IFS=': '
00:02:35.024 03:16:12 -- setup/common.sh@31 -- # read -r var val _
00:02:35.024 03:16:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38342984 kB' 'MemAvailable: 42036756 kB' 'Buffers: 2696 kB' 'Cached: 17767732 kB' 'SwapCached: 0 kB' 'Active: 14717528 kB' 'Inactive: 3498348 kB' 'Active(anon): 14130356 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448632 kB' 'Mapped: 209508 kB' 'Shmem: 13684908 kB' 'KReclaimable: 204180 kB' 'Slab: 584036 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379856 kB' 'KernelStack: 12800 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15265820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196600 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
[setup/common.sh@31-32 xtrace elided: every key from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and hits continue]
00:02:35.026 03:16:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:35.026 03:16:12 -- setup/common.sh@33 -- # echo 0
00:02:35.026 03:16:12 -- setup/common.sh@33 -- # return 0
00:02:35.026 03:16:12 -- setup/hugepages.sh@99 -- # surp=0
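Independently of the script, the HugePages_* counters being fished out of /proc/meminfo one key at a time are also exposed per page size under sysfs; a quick cross-check one could run on the same host (assuming 2 MiB pages, which matches the 'Hugepagesize: 2048 kB' entries in the snapshots):

  d=/sys/kernel/mm/hugepages/hugepages-2048kB
  for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
      printf '%-20s %s\n' "$f" "$(< "$d/$f")"
  done
  # per-node counts live under the node directories:
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages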
00:02:35.026 03:16:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:35.026 03:16:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:35.026 03:16:12 -- setup/common.sh@18 -- # local node=
00:02:35.026 03:16:12 -- setup/common.sh@19 -- # local var val
00:02:35.026 03:16:12 -- setup/common.sh@20 -- # local mem_f mem
00:02:35.026 03:16:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:35.026 03:16:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:35.026 03:16:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:35.026 03:16:12 -- setup/common.sh@28 -- # mapfile -t mem
00:02:35.026 03:16:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:35.026 03:16:12 -- setup/common.sh@31 -- # IFS=': '
00:02:35.026 03:16:12 -- setup/common.sh@31 -- # read -r var val _
00:02:35.026 03:16:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38342440 kB' 'MemAvailable: 42036212 kB' 'Buffers: 2696 kB' 'Cached: 17767744 kB' 'SwapCached: 0 kB' 'Active: 14717292 kB' 'Inactive: 3498348 kB' 'Active(anon): 14130120 kB' 'Inactive(anon): 0 kB' 'Active(file): 587172 kB' 'Inactive(file): 3498348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448368 kB' 'Mapped: 209488 kB' 'Shmem: 13684920 kB' 'KReclaimable: 204180 kB' 'Slab: 584028 kB' 'SReclaimable: 204180 kB' 'SUnreclaim: 379848 kB' 'KernelStack: 12832 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 15265708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB'
[setup/common.sh@31-32 xtrace elided: every key from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits continue]
00:02:35.027 03:16:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:35.027 03:16:12 -- setup/common.sh@33 -- # echo 0
00:02:35.027 03:16:12 -- setup/common.sh@33 -- # return 0
00:02:35.027 03:16:12 -- setup/hugepages.sh@100 -- # resv=0
00:02:35.027 03:16:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:02:35.027 03:16:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:02:35.027 03:16:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:02:35.027 03:16:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:02:35.027 03:16:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:35.027 03:16:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
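The arithmetic tests at hugepages.sh@107 and @109 are the pass condition for this block: the allocated total must equal the requested count, with no surplus or reserved pages outstanding. The operands print as literals in the xtrace, so the exact left-hand variable isn't visible; under that reading, a standalone restatement (variable names are mine, and it reuses the get_meminfo sketch from earlier):

  nr_hugepages=1024 surp=0 resv=0                # values read above
  total=$(get_meminfo HugePages_Total)           # 1024 in this run
  (( total == nr_hugepages + surp + resv )) || echo 'surplus/reserved pages outstanding' >&2
  (( total == nr_hugepages ))               || echo 'hugepage allocation mismatch' >&2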
34359738367 kB' 'VmallocUsed: 196616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2496092 kB' 'DirectMap2M: 20492288 kB' 'DirectMap1G: 46137344 kB' 00:02:35.027 03:16:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.027 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.027 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.027 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.027 03:16:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.027 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.027 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.027 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.027 03:16:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.027 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.027 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.027 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.027 03:16:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.027 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.027 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.027 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.027 03:16:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.027 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.027 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- 
setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var 
val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.028 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.028 03:16:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.029 03:16:12 -- 
setup/common.sh@33 -- # echo 1024 00:02:35.029 03:16:12 -- setup/common.sh@33 -- # return 0 00:02:35.029 03:16:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:35.029 03:16:12 -- setup/hugepages.sh@112 -- # get_nodes 00:02:35.029 03:16:12 -- setup/hugepages.sh@27 -- # local node 00:02:35.029 03:16:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.029 03:16:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:35.029 03:16:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.029 03:16:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:35.029 03:16:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:35.029 03:16:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:35.029 03:16:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:35.029 03:16:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:35.029 03:16:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:35.029 03:16:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.029 03:16:12 -- setup/common.sh@18 -- # local node=0 00:02:35.029 03:16:12 -- setup/common.sh@19 -- # local var val 00:02:35.029 03:16:12 -- setup/common.sh@20 -- # local mem_f mem 00:02:35.029 03:16:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.029 03:16:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:35.029 03:16:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:35.029 03:16:12 -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.029 03:16:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 18817920 kB' 'MemUsed: 14059020 kB' 'SwapCached: 0 kB' 'Active: 7599496 kB' 'Inactive: 3329304 kB' 'Active(anon): 7366892 kB' 'Inactive(anon): 0 kB' 'Active(file): 232604 kB' 'Inactive(file): 3329304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10763004 kB' 'Mapped: 104248 kB' 'AnonPages: 168928 kB' 'Shmem: 7201096 kB' 'KernelStack: 6008 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105896 kB' 'Slab: 327464 kB' 'SReclaimable: 105896 kB' 'SUnreclaim: 221568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 
03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.029 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.029 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # continue 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.030 03:16:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.030 03:16:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.030 03:16:12 -- setup/common.sh@33 -- # echo 0 00:02:35.030 03:16:12 -- setup/common.sh@33 -- # return 0 00:02:35.030 03:16:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:35.030 03:16:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:35.030 03:16:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:35.030 03:16:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:35.030 03:16:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:35.030 node0=1024 expecting 1024 00:02:35.030 03:16:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:35.030 00:02:35.030 real 0m2.836s 00:02:35.030 user 0m1.131s 00:02:35.030 sys 0m1.628s 00:02:35.030 03:16:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:35.030 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:02:35.030 ************************************ 00:02:35.030 END TEST no_shrink_alloc 00:02:35.030 ************************************ 00:02:35.030 03:16:12 -- setup/hugepages.sh@217 -- # clear_hp 00:02:35.030 03:16:12 -- setup/hugepages.sh@37 -- # local node hp 00:02:35.030 03:16:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:35.030 
03:16:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:35.030 03:16:12 -- setup/hugepages.sh@41 -- # echo 0 00:02:35.030 03:16:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:35.030 03:16:12 -- setup/hugepages.sh@41 -- # echo 0 00:02:35.030 03:16:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:35.030 03:16:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:35.030 03:16:12 -- setup/hugepages.sh@41 -- # echo 0 00:02:35.030 03:16:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:35.030 03:16:12 -- setup/hugepages.sh@41 -- # echo 0 00:02:35.030 03:16:12 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:35.030 03:16:12 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:35.030 00:02:35.030 real 0m11.836s 00:02:35.030 user 0m4.471s 00:02:35.030 sys 0m6.037s 00:02:35.030 03:16:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:35.030 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:02:35.030 ************************************ 00:02:35.030 END TEST hugepages 00:02:35.030 ************************************ 00:02:35.030 03:16:12 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:35.030 03:16:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:35.030 03:16:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:35.030 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:02:35.289 ************************************ 00:02:35.289 START TEST driver 00:02:35.289 ************************************ 00:02:35.289 03:16:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:35.289 * Looking for test storage... 
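The hugepage bookkeeping that the xtrace above walks through key by key is easier to follow in condensed form: get_meminfo scans /proc/meminfo (or a node's meminfo file, whose lines carry a "Node N " prefix) with an `IFS=': ' read -r var val _` loop, and clear_hp zeroes every per-node hugepage pool once the suite finishes. Below is a minimal sketch of that logic, not the verbatim setup/common.sh or setup/hugepages.sh; the closing check mirrors the trace's `(( 1024 == nr_hugepages + surp + resv ))` invariant.

```bash
#!/usr/bin/env bash
# Condensed sketch of the traced meminfo/hugepage logic -- not the verbatim
# setup/common.sh or setup/hugepages.sh.
shopt -s extglob nullglob

# Fetch one field from /proc/meminfo, or from node $2's meminfo when given.
get_meminfo() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo mem=()
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Zero every per-node hugepage pool, as clear_hp does when the suite finishes.
clear_hp() {
    local hp
    for hp in /sys/devices/system/node/node[0-9]*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
}

# The accounting the test asserts: allocated pages == requested + surplus + reserved.
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
(( $(get_meminfo HugePages_Total) == 1024 + surp + resv )) &&
    echo "nr_hugepages=1024 fully accounted for"
```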
00:02:35.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:35.289 03:16:12 -- setup/driver.sh@68 -- # setup reset 00:02:35.289 03:16:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:35.289 03:16:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:37.823 03:16:15 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:37.823 03:16:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:37.823 03:16:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:37.823 03:16:15 -- common/autotest_common.sh@10 -- # set +x 00:02:37.823 ************************************ 00:02:37.823 START TEST guess_driver 00:02:37.823 ************************************ 00:02:37.823 03:16:15 -- common/autotest_common.sh@1111 -- # guess_driver 00:02:37.823 03:16:15 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:37.823 03:16:15 -- setup/driver.sh@47 -- # local fail=0 00:02:37.823 03:16:15 -- setup/driver.sh@49 -- # pick_driver 00:02:37.823 03:16:15 -- setup/driver.sh@36 -- # vfio 00:02:37.823 03:16:15 -- setup/driver.sh@21 -- # local iommu_groups 00:02:37.823 03:16:15 -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:37.823 03:16:15 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:37.823 03:16:15 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:37.823 03:16:15 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:37.823 03:16:15 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:02:37.823 03:16:15 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:37.823 03:16:15 -- setup/driver.sh@14 -- # mod vfio_pci 00:02:37.823 03:16:15 -- setup/driver.sh@12 -- # dep vfio_pci 00:02:37.823 03:16:15 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:37.823 03:16:15 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:37.823 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:37.823 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:37.823 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:37.823 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:37.823 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:37.823 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:37.823 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:37.823 03:16:15 -- setup/driver.sh@30 -- # return 0 00:02:37.823 03:16:15 -- setup/driver.sh@37 -- # echo vfio-pci 00:02:37.823 03:16:15 -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:37.823 03:16:15 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:37.823 03:16:15 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:37.823 Looking for driver=vfio-pci 00:02:37.823 03:16:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:37.823 03:16:15 -- setup/driver.sh@45 -- # setup output config 00:02:37.823 03:16:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:37.823 03:16:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:38.758 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.758 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:02:38.758 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.758 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.758 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.758 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.758 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.758 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.758 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.758 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.758 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.758 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.758 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.758 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.759 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.759 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.759 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.759 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.759 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.759 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.759 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.759 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.759 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.759 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.759 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.759 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.759 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.759 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.759 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.759 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.759 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.759 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.759 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.759 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.759 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.759 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.759 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.759 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.759 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.759 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.759 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.759 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:38.759 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:38.759 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:38.759 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:39.017 03:16:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:39.017 03:16:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:39.017 03:16:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:39.984 03:16:17 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:02:39.985 03:16:17 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:39.985 03:16:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:39.985 03:16:17 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:39.985 03:16:17 -- setup/driver.sh@65 -- # setup reset 00:02:39.985 03:16:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:39.985 03:16:17 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.516 00:02:42.516 real 0m4.546s 00:02:42.516 user 0m1.035s 00:02:42.516 sys 0m1.686s 00:02:42.516 03:16:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:42.516 03:16:19 -- common/autotest_common.sh@10 -- # set +x 00:02:42.516 ************************************ 00:02:42.516 END TEST guess_driver 00:02:42.516 ************************************ 00:02:42.516 00:02:42.516 real 0m7.058s 00:02:42.516 user 0m1.611s 00:02:42.516 sys 0m2.766s 00:02:42.516 03:16:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:42.516 03:16:19 -- common/autotest_common.sh@10 -- # set +x 00:02:42.516 ************************************ 00:02:42.516 END TEST driver 00:02:42.516 ************************************ 00:02:42.517 03:16:19 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:42.517 03:16:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:42.517 03:16:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:42.517 03:16:19 -- common/autotest_common.sh@10 -- # set +x 00:02:42.517 ************************************ 00:02:42.517 START TEST devices 00:02:42.517 ************************************ 00:02:42.517 03:16:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:42.517 * Looking for test storage... 
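The guess_driver pass that just finished condenses to a short decision: prefer vfio-pci when the kernel has populated at least one IOMMU group (or unsafe no-IOMMU mode is enabled) and modprobe can resolve vfio_pci to real .ko modules. The following is a sketch of that flow under standard sysfs paths, not the verbatim setup/driver.sh; the 'No valid driver found' sentinel is the same string the trace compares the result against.

```bash
#!/usr/bin/env bash
# Sketch of the pick_driver/vfio decision traced above -- not the verbatim driver.sh.
shopt -s nullglob

pick_driver() {
    local unsafe_vfio=N
    # The no-IOMMU escape hatch, if the vfio module exposes the knob.
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # An IOMMU is usable when the kernel populated at least one group
    # (the trace counted 141 of them on this node).
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy]* ]]; then
        # vfio_pci counts as available when modprobe resolves it to real .ko files.
        if modprobe --show-depends vfio_pci 2> /dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
    return 1
}

driver=$(pick_driver)
echo "Looking for driver=$driver"
```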
00:02:42.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:42.517 03:16:19 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:42.517 03:16:19 -- setup/devices.sh@192 -- # setup reset 00:02:42.517 03:16:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:42.517 03:16:19 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:43.893 03:16:21 -- setup/devices.sh@194 -- # get_zoned_devs 00:02:43.893 03:16:21 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:43.893 03:16:21 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:43.893 03:16:21 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:43.893 03:16:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:43.893 03:16:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:43.893 03:16:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:43.893 03:16:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:43.893 03:16:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:43.893 03:16:21 -- setup/devices.sh@196 -- # blocks=() 00:02:43.893 03:16:21 -- setup/devices.sh@196 -- # declare -a blocks 00:02:43.893 03:16:21 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:02:43.893 03:16:21 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:02:43.893 03:16:21 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:02:43.893 03:16:21 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:02:43.893 03:16:21 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:02:43.893 03:16:21 -- setup/devices.sh@201 -- # ctrl=nvme0 00:02:43.893 03:16:21 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:02:43.893 03:16:21 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:43.893 03:16:21 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:02:43.893 03:16:21 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:02:43.893 03:16:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:02:43.893 No valid GPT data, bailing 00:02:43.893 03:16:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:43.893 03:16:21 -- scripts/common.sh@391 -- # pt= 00:02:43.893 03:16:21 -- scripts/common.sh@392 -- # return 1 00:02:43.893 03:16:21 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:02:43.893 03:16:21 -- setup/common.sh@76 -- # local dev=nvme0n1 00:02:43.893 03:16:21 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:02:43.893 03:16:21 -- setup/common.sh@80 -- # echo 1000204886016 00:02:43.893 03:16:21 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:02:43.893 03:16:21 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:02:43.893 03:16:21 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:02:43.893 03:16:21 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:43.893 03:16:21 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:43.893 03:16:21 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:43.893 03:16:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:43.893 03:16:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:43.893 03:16:21 -- common/autotest_common.sh@10 -- # set +x 00:02:44.152 ************************************ 00:02:44.152 START TEST nvme_mount 00:02:44.152 ************************************ 00:02:44.152 03:16:21 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:02:44.152 03:16:21 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:02:44.152 03:16:21 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:02:44.152 03:16:21 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:44.152 03:16:21 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:44.152 03:16:21 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:02:44.152 03:16:21 -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:44.152 03:16:21 -- setup/common.sh@40 -- # local part_no=1 00:02:44.152 03:16:21 -- setup/common.sh@41 -- # local size=1073741824 00:02:44.152 03:16:21 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:44.152 03:16:21 -- setup/common.sh@44 -- # parts=() 00:02:44.152 03:16:21 -- setup/common.sh@44 -- # local parts 00:02:44.152 03:16:21 -- setup/common.sh@46 -- # (( part = 1 )) 00:02:44.152 03:16:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:44.152 03:16:21 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:44.152 03:16:21 -- setup/common.sh@46 -- # (( part++ )) 00:02:44.152 03:16:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:44.152 03:16:21 -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:44.152 03:16:21 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:44.152 03:16:21 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:02:45.086 Creating new GPT entries in memory. 00:02:45.086 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:45.086 other utilities. 00:02:45.086 03:16:22 -- setup/common.sh@57 -- # (( part = 1 )) 00:02:45.086 03:16:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:45.086 03:16:22 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:45.086 03:16:22 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:45.086 03:16:22 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:46.020 Creating new GPT entries in memory. 00:02:46.020 The operation has completed successfully. 
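The sector arithmetic behind that sgdisk call is compact: a 1 GiB partition is 2097152 512-byte sectors, starting at sector 2048 and therefore ending at sector 2099199, exactly the --new=1:2048:2099199 range in the trace. Below is a standalone sketch of the same flow; partprobe is an assumption here, standing in for the scripts/sync_dev_uevents.sh helper the harness launches to wait for the kernel's partition uevent.

```bash
#!/usr/bin/env bash
# Sketch of the partitioning step traced above; DISK and the size mirror the trace.
set -e

DISK=/dev/nvme0n1
size=$((1073741824 / 512))             # 1 GiB in 512-byte sectors = 2097152
part_start=2048                        # first partition starts at sector 2048
part_end=$((part_start + size - 1))    # 2048 + 2097152 - 1 = 2099199

sgdisk "$DISK" --zap-all               # destroy any existing GPT/MBR structures
# Hold the disk lock while creating partition 1, as the traced flock does.
flock "$DISK" sgdisk "$DISK" --new=1:"$part_start":"$part_end"
# Assumption: partprobe stands in for waiting on the udev "partition added"
# event (the harness uses scripts/sync_dev_uevents.sh for that instead).
partprobe "$DISK"
[[ -b ${DISK}p1 ]] && echo "created ${DISK}p1"
```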
00:02:46.020 03:16:23 -- setup/common.sh@57 -- # (( part++ )) 00:02:46.020 03:16:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:46.020 03:16:23 -- setup/common.sh@62 -- # wait 119838 00:02:46.020 03:16:23 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:46.020 03:16:23 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:02:46.020 03:16:23 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:46.020 03:16:23 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:02:46.020 03:16:23 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:02:46.278 03:16:23 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:46.278 03:16:23 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:46.278 03:16:23 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:46.278 03:16:23 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:02:46.278 03:16:23 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:46.278 03:16:23 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:46.278 03:16:23 -- setup/devices.sh@53 -- # local found=0 00:02:46.278 03:16:23 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:46.278 03:16:23 -- setup/devices.sh@56 -- # : 00:02:46.278 03:16:23 -- setup/devices.sh@59 -- # local pci status 00:02:46.279 03:16:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:46.279 03:16:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:46.279 03:16:23 -- setup/devices.sh@47 -- # setup output config 00:02:46.279 03:16:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.279 03:16:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:47.211 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.211 03:16:24 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:02:47.211 03:16:24 -- setup/devices.sh@63 -- # found=1 00:02:47.211 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.211 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.211 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 
03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.212 03:16:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:47.212 03:16:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.470 03:16:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:47.470 03:16:24 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:47.470 03:16:24 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:47.470 03:16:24 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:47.470 03:16:24 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:47.470 03:16:24 -- setup/devices.sh@110 -- # cleanup_nvme 00:02:47.470 03:16:24 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:47.470 03:16:24 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:47.470 03:16:24 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:47.470 03:16:24 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:02:47.470 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:47.470 03:16:24 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:47.470 03:16:24 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:47.728 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:02:47.728 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:02:47.728 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:02:47.728 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:02:47.728 03:16:25 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:02:47.728 03:16:25 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:02:47.728 03:16:25 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:47.728 03:16:25 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:02:47.728 03:16:25 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:02:47.728 03:16:25 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:47.728 03:16:25 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:47.728 03:16:25 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:47.728 03:16:25 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:02:47.728 03:16:25 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:47.728 03:16:25 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:47.728 03:16:25 -- setup/devices.sh@53 -- # local found=0 00:02:47.728 03:16:25 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:47.728 03:16:25 -- setup/devices.sh@56 -- # : 00:02:47.728 03:16:25 -- setup/devices.sh@59 -- # local pci status 00:02:47.728 03:16:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:47.728 03:16:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:47.728 03:16:25 -- setup/devices.sh@47 -- # setup output config 00:02:47.728 03:16:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.728 03:16:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:02:49.100 03:16:26 -- setup/devices.sh@63 -- # found=1 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.100 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.100 03:16:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:49.100 03:16:26 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:49.100 03:16:26 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.100 03:16:26 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:49.100 03:16:26 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:49.100 03:16:26 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.100 03:16:26 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:02:49.100 03:16:26 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:49.101 03:16:26 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:02:49.101 03:16:26 -- setup/devices.sh@50 -- # local mount_point= 00:02:49.101 03:16:26 -- setup/devices.sh@51 -- # local test_file= 00:02:49.101 03:16:26 -- setup/devices.sh@53 -- # local found=0 00:02:49.101 03:16:26 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:02:49.101 03:16:26 -- setup/devices.sh@59 -- # local pci status 00:02:49.101 03:16:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.101 03:16:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:49.101 03:16:26 -- setup/devices.sh@47 -- # setup output config 00:02:49.101 03:16:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.101 03:16:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:50.475 03:16:27 -- 
setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:02:50.475 03:16:27 -- setup/devices.sh@63 -- # found=1 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.475 03:16:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:50.475 03:16:27 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:02:50.475 03:16:27 -- setup/devices.sh@68 -- # return 0 00:02:50.475 03:16:27 -- setup/devices.sh@128 -- # cleanup_nvme 00:02:50.475 03:16:27 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:50.475 03:16:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:02:50.475 03:16:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:50.475 03:16:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:50.475 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:50.475 00:02:50.475 real 0m6.284s 00:02:50.475 user 0m1.444s 00:02:50.475 sys 0m2.414s 00:02:50.475 03:16:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:50.475 03:16:27 -- common/autotest_common.sh@10 -- # set +x 00:02:50.475 ************************************ 00:02:50.475 END TEST nvme_mount 00:02:50.475 ************************************ 00:02:50.475 03:16:27 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:02:50.475 03:16:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:50.475 03:16:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:50.475 03:16:27 -- common/autotest_common.sh@10 -- # set +x 00:02:50.475 ************************************ 00:02:50.475 START TEST dm_mount 00:02:50.475 ************************************ 00:02:50.475 03:16:27 -- common/autotest_common.sh@1111 -- # dm_mount 00:02:50.475 03:16:27 -- setup/devices.sh@144 -- # pv=nvme0n1 00:02:50.475 03:16:27 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:02:50.475 03:16:27 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:02:50.475 03:16:27 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:02:50.475 03:16:27 -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:50.475 03:16:27 -- setup/common.sh@40 -- # local part_no=2 00:02:50.475 03:16:27 -- setup/common.sh@41 -- # local size=1073741824 00:02:50.475 03:16:27 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:50.475 03:16:27 -- setup/common.sh@44 -- # parts=() 00:02:50.475 03:16:27 -- setup/common.sh@44 -- # local parts 00:02:50.475 03:16:27 -- setup/common.sh@46 -- # (( part = 1 )) 00:02:50.475 03:16:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:50.475 03:16:27 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:50.475 03:16:27 -- setup/common.sh@46 -- # (( part++ )) 00:02:50.475 03:16:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:50.475 03:16:27 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:50.475 03:16:27 -- setup/common.sh@46 -- # (( part++ )) 00:02:50.475 03:16:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:50.475 03:16:27 -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:50.475 03:16:27 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:50.475 03:16:27 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:02:51.408 Creating new GPT entries in memory. 00:02:51.408 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:51.408 other utilities. 00:02:51.408 03:16:28 -- setup/common.sh@57 -- # (( part = 1 )) 00:02:51.408 03:16:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:51.408 03:16:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:51.408 03:16:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:51.408 03:16:28 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:52.785 Creating new GPT entries in memory. 00:02:52.785 The operation has completed successfully. 
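[editor's note] The dm_mount test above carves two 1 GiB partitions out of /dev/nvme0n1 before building a device-mapper target on them. A minimal standalone sketch of that partitioning sequence, using the same sgdisk calls the trace shows (partprobe stands in here for the repo's sync_dev_uevents.sh helper, which waits for the matching udev block events):

  DISK=/dev/nvme0n1
  sgdisk "$DISK" --zap-all                               # wipe old GPT/MBR structures
  flock "$DISK" sgdisk "$DISK" --new=1:2048:2099199      # partition 1: 2097152 sectors = 1 GiB
  flock "$DISK" sgdisk "$DISK" --new=2:2099200:4196351   # partition 2: another 1 GiB
  partprobe "$DISK"                                      # re-read the partition table

The 1 GiB figure follows from the trace's size=1073741824 divided by the 512-byte sector size, giving the 2097152-sector windows above.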
00:02:52.785 03:16:29 -- setup/common.sh@57 -- # (( part++ )) 00:02:52.785 03:16:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:52.785 03:16:29 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:52.785 03:16:29 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:52.785 03:16:29 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:02:53.722 The operation has completed successfully. 00:02:53.722 03:16:30 -- setup/common.sh@57 -- # (( part++ )) 00:02:53.722 03:16:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:53.722 03:16:30 -- setup/common.sh@62 -- # wait 122233 00:02:53.722 03:16:30 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:02:53.722 03:16:30 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:53.722 03:16:30 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:02:53.722 03:16:30 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:02:53.722 03:16:30 -- setup/devices.sh@160 -- # for t in {1..5} 00:02:53.722 03:16:30 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:02:53.722 03:16:30 -- setup/devices.sh@161 -- # break 00:02:53.722 03:16:30 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:02:53.722 03:16:30 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:02:53.722 03:16:30 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:02:53.722 03:16:30 -- setup/devices.sh@166 -- # dm=dm-0 00:02:53.722 03:16:30 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:02:53.722 03:16:30 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:02:53.722 03:16:30 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:53.722 03:16:30 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:02:53.722 03:16:30 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:53.722 03:16:30 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:02:53.722 03:16:30 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:02:53.722 03:16:31 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:53.722 03:16:31 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:02:53.722 03:16:31 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:53.722 03:16:31 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:02:53.722 03:16:31 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:53.722 03:16:31 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:02:53.722 03:16:31 -- setup/devices.sh@53 -- # local found=0 00:02:53.722 03:16:31 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:02:53.722 03:16:31 -- setup/devices.sh@56 -- # : 00:02:53.722 03:16:31 -- 
setup/devices.sh@59 -- # local pci status 00:02:53.722 03:16:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.722 03:16:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:53.722 03:16:31 -- setup/devices.sh@47 -- # setup output config 00:02:53.722 03:16:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.722 03:16:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:02:54.658 03:16:32 -- setup/devices.sh@63 -- # found=1 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.658 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.658 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.659 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.659 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.659 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.659 03:16:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.659 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.659 03:16:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:54.659 03:16:32 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:02:54.659 03:16:32 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:54.659 03:16:32 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:02:54.659 03:16:32 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:02:54.659 03:16:32 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:54.916 03:16:32 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:02:54.916 03:16:32 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:54.916 03:16:32 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:02:54.916 03:16:32 -- setup/devices.sh@50 -- # local mount_point= 00:02:54.916 03:16:32 -- setup/devices.sh@51 -- # local test_file= 00:02:54.916 03:16:32 -- setup/devices.sh@53 -- # local found=0 00:02:54.916 03:16:32 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:02:54.916 03:16:32 -- setup/devices.sh@59 -- # local pci status 00:02:54.916 03:16:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.916 03:16:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:54.916 03:16:32 -- setup/devices.sh@47 -- # setup output config 00:02:54.916 03:16:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.917 03:16:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:55.850 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.850 03:16:33 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:02:55.850 03:16:33 -- setup/devices.sh@63 -- # found=1 00:02:55.850 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.850 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.850 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.850 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.850 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.850 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.850 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.850 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.850 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.850 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.850 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.850 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.850 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.850 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.851 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 
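[editor's note] The cleanup path above unmounts the dm target and then re-verifies it by holder links rather than mount points; the PCI lines continuing below are the same allow-list walk as before. A small sketch of that holder check, assuming the same device names the trace uses (nvme_dm_test on nvme0n1p1/p2):

  dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")      # e.g. dm-0, as in the trace
  [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] && echo "nvme0n1p1 held by $dm"
  [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]] && echo "nvme0n1p2 held by $dm"

The holders directory is how the kernel exposes which upper device (here the dm target) sits on top of each partition, which is why the test can confirm the mapping with nothing mounted.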
00:02:55.851 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.851 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.851 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.851 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.851 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.851 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.851 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.851 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.851 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.851 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.851 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.851 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.851 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.851 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.851 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.851 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.851 03:16:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.851 03:16:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.109 03:16:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:56.109 03:16:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:02:56.109 03:16:33 -- setup/devices.sh@68 -- # return 0 00:02:56.109 03:16:33 -- setup/devices.sh@187 -- # cleanup_dm 00:02:56.109 03:16:33 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:56.109 03:16:33 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:02:56.109 03:16:33 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:02:56.109 03:16:33 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:56.109 03:16:33 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:02:56.109 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:56.109 03:16:33 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:02:56.109 03:16:33 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:02:56.109 00:02:56.109 real 0m5.594s 00:02:56.109 user 0m0.908s 00:02:56.109 sys 0m1.569s 00:02:56.109 03:16:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:56.109 03:16:33 -- common/autotest_common.sh@10 -- # set +x 00:02:56.109 ************************************ 00:02:56.109 END TEST dm_mount 00:02:56.109 ************************************ 00:02:56.109 03:16:33 -- setup/devices.sh@1 -- # cleanup 00:02:56.109 03:16:33 -- setup/devices.sh@11 -- # cleanup_nvme 00:02:56.109 03:16:33 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.109 03:16:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:56.109 03:16:33 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:02:56.109 03:16:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:56.109 03:16:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:56.367 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:02:56.367 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:02:56.367 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:02:56.367 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:02:56.367 03:16:33 -- setup/devices.sh@12 -- # cleanup_dm 00:02:56.367 03:16:33 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:56.367 03:16:33 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:02:56.367 03:16:33 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:56.367 03:16:33 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:02:56.367 03:16:33 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:02:56.367 03:16:33 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:02:56.367 00:02:56.367 real 0m13.970s 00:02:56.367 user 0m3.024s 00:02:56.367 sys 0m5.143s 00:02:56.367 03:16:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:56.367 03:16:33 -- common/autotest_common.sh@10 -- # set +x 00:02:56.367 ************************************ 00:02:56.367 END TEST devices 00:02:56.367 ************************************ 00:02:56.367 00:02:56.367 real 0m43.585s 00:02:56.367 user 0m12.478s 00:02:56.367 sys 0m19.424s 00:02:56.367 03:16:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:56.367 03:16:33 -- common/autotest_common.sh@10 -- # set +x 00:02:56.367 ************************************ 00:02:56.367 END TEST setup.sh 00:02:56.367 ************************************ 00:02:56.367 03:16:33 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:57.743 Hugepages 00:02:57.743 node hugesize free / total 00:02:57.743 node0 1048576kB 0 / 0 00:02:57.743 node0 2048kB 2048 / 2048 00:02:57.743 node1 1048576kB 0 / 0 00:02:57.743 node1 2048kB 0 / 0 00:02:57.743 00:02:57.743 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:57.743 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:57.743 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:02:57.743 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:57.743 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:57.743 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:57.743 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:57.743 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:57.743 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:57.743 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:57.743 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:57.744 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:57.744 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:57.744 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:57.744 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:57.744 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:57.744 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:57.744 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:57.744 03:16:35 -- spdk/autotest.sh@130 -- # uname -s 00:02:57.744 03:16:35 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:02:57.744 03:16:35 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:02:57.744 03:16:35 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:59.119 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:59.119 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:59.119 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:59.119 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:59.119 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:59.119 0000:00:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:02:59.119 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:59.119 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:59.119 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:59.119 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:59.119 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:59.119 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:59.119 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:59.119 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:59.119 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:59.119 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:59.742 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:00.000 03:16:37 -- common/autotest_common.sh@1518 -- # sleep 1 00:03:00.934 03:16:38 -- common/autotest_common.sh@1519 -- # bdfs=() 00:03:00.934 03:16:38 -- common/autotest_common.sh@1519 -- # local bdfs 00:03:00.934 03:16:38 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:00.934 03:16:38 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:00.934 03:16:38 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:00.934 03:16:38 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:00.934 03:16:38 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:00.934 03:16:38 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:00.934 03:16:38 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:00.934 03:16:38 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:00.934 03:16:38 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:03:00.934 03:16:38 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.309 Waiting for block devices as requested 00:03:02.309 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:02.309 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:02.309 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:02.309 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:02.309 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:02.309 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:02.567 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:02.567 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:02.567 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:02.567 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:02.825 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:02.825 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:02.825 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:03.082 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:03.082 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:03.082 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:03.082 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:03.341 03:16:40 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:03.341 03:16:40 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:03.341 03:16:40 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:03:03.341 03:16:40 -- common/autotest_common.sh@1488 -- # grep 0000:88:00.0/nvme/nvme 00:03:03.341 03:16:40 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:03.341 03:16:40 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:03.341 03:16:40 -- 
common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:03.341 03:16:40 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:03:03.341 03:16:40 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:03.341 03:16:40 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:03.341 03:16:40 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:03.341 03:16:40 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:03.341 03:16:40 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:03.341 03:16:40 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:03.341 03:16:40 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:03.341 03:16:40 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:03.341 03:16:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:03.341 03:16:40 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:03.341 03:16:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:03.341 03:16:40 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:03.341 03:16:40 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:03.341 03:16:40 -- common/autotest_common.sh@1543 -- # continue 00:03:03.341 03:16:40 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:03.341 03:16:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:03.341 03:16:40 -- common/autotest_common.sh@10 -- # set +x 00:03:03.341 03:16:40 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:03.341 03:16:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:03.341 03:16:40 -- common/autotest_common.sh@10 -- # set +x 00:03:03.341 03:16:40 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:04.715 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:04.715 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:04.715 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:04.715 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:04.715 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:04.715 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:04.715 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:04.715 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:04.715 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:04.715 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:04.715 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:04.715 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:04.715 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:04.715 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:04.715 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:04.715 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:05.654 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:05.654 03:16:43 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:05.654 03:16:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:05.654 03:16:43 -- common/autotest_common.sh@10 -- # set +x 00:03:05.654 03:16:43 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:05.654 03:16:43 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:03:05.654 03:16:43 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:03:05.654 03:16:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:05.654 03:16:43 -- common/autotest_common.sh@1563 -- # local bdfs 00:03:05.654 03:16:43 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:03:05.654 03:16:43 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:05.654 
03:16:43 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:05.654 03:16:43 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:05.654 03:16:43 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:05.654 03:16:43 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:05.654 03:16:43 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:05.654 03:16:43 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:03:05.654 03:16:43 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:05.654 03:16:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:05.654 03:16:43 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:05.654 03:16:43 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:05.654 03:16:43 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:05.654 03:16:43 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:88:00.0 00:03:05.654 03:16:43 -- common/autotest_common.sh@1578 -- # [[ -z 0000:88:00.0 ]] 00:03:05.654 03:16:43 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=127411 00:03:05.654 03:16:43 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:05.654 03:16:43 -- common/autotest_common.sh@1584 -- # waitforlisten 127411 00:03:05.654 03:16:43 -- common/autotest_common.sh@817 -- # '[' -z 127411 ']' 00:03:05.654 03:16:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:05.654 03:16:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:05.654 03:16:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:05.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:05.654 03:16:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:05.654 03:16:43 -- common/autotest_common.sh@10 -- # set +x 00:03:05.912 [2024-04-19 03:16:43.249402] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
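[editor's note] At this point autotest launches spdk_tgt and blocks in waitforlisten until the RPC socket is up; the banner on the following lines is the target's own EAL/DPDK startup output. A rough stand-in for that launch-and-wait step (the polling loop is an illustration, not the actual waitforlisten helper):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt &
  pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the JSON-RPC socket
  echo "spdk_tgt ($pid) listening on /var/tmp/spdk.sock"

/var/tmp/spdk.sock is the default rpc_addr named in the trace, which is also where the rpc.py bdev_nvme_opal_revert call below sends its request.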
00:03:05.912 [2024-04-19 03:16:43.249495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127411 ] 00:03:05.912 EAL: No free 2048 kB hugepages reported on node 1 00:03:05.912 [2024-04-19 03:16:43.311519] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:05.912 [2024-04-19 03:16:43.427482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:06.845 03:16:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:06.845 03:16:44 -- common/autotest_common.sh@850 -- # return 0 00:03:06.845 03:16:44 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:03:06.845 03:16:44 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:03:06.845 03:16:44 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:10.129 nvme0n1 00:03:10.129 03:16:47 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:10.129 [2024-04-19 03:16:47.446518] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:10.129 [2024-04-19 03:16:47.446561] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:10.129 request: 00:03:10.129 { 00:03:10.129 "nvme_ctrlr_name": "nvme0", 00:03:10.129 "password": "test", 00:03:10.129 "method": "bdev_nvme_opal_revert", 00:03:10.129 "req_id": 1 00:03:10.129 } 00:03:10.129 Got JSON-RPC error response 00:03:10.129 response: 00:03:10.129 { 00:03:10.129 "code": -32603, 00:03:10.129 "message": "Internal error" 00:03:10.129 } 00:03:10.129 03:16:47 -- common/autotest_common.sh@1590 -- # true 00:03:10.129 03:16:47 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:03:10.129 03:16:47 -- common/autotest_common.sh@1594 -- # killprocess 127411 00:03:10.129 03:16:47 -- common/autotest_common.sh@936 -- # '[' -z 127411 ']' 00:03:10.129 03:16:47 -- common/autotest_common.sh@940 -- # kill -0 127411 00:03:10.129 03:16:47 -- common/autotest_common.sh@941 -- # uname 00:03:10.129 03:16:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:10.129 03:16:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127411 00:03:10.129 03:16:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:10.129 03:16:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:10.129 03:16:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127411' 00:03:10.129 killing process with pid 127411 00:03:10.129 03:16:47 -- common/autotest_common.sh@955 -- # kill 127411 00:03:10.130 03:16:47 -- common/autotest_common.sh@960 -- # wait 127411 00:03:12.030 03:16:49 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:12.030 03:16:49 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:12.030 03:16:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:12.030 03:16:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:12.030 03:16:49 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:12.030 03:16:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:12.030 03:16:49 -- common/autotest_common.sh@10 -- # set +x 00:03:12.030 03:16:49 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:12.030 03:16:49 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:12.030 03:16:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:12.030 03:16:49 -- common/autotest_common.sh@10 -- # set +x 00:03:12.030 ************************************ 00:03:12.030 START TEST env 00:03:12.030 ************************************ 00:03:12.030 03:16:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:12.030 * Looking for test storage... 00:03:12.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:12.030 03:16:49 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:12.030 03:16:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:12.030 03:16:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:12.030 03:16:49 -- common/autotest_common.sh@10 -- # set +x 00:03:12.030 ************************************ 00:03:12.030 START TEST env_memory 00:03:12.030 ************************************ 00:03:12.030 03:16:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:12.030 00:03:12.030 00:03:12.030 CUnit - A unit testing framework for C - Version 2.1-3 00:03:12.030 http://cunit.sourceforge.net/ 00:03:12.030 00:03:12.030 00:03:12.030 Suite: memory 00:03:12.289 Test: alloc and free memory map ...[2024-04-19 03:16:49.594166] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:12.289 passed 00:03:12.289 Test: mem map translation ...[2024-04-19 03:16:49.615252] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:12.289 [2024-04-19 03:16:49.615274] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:12.289 [2024-04-19 03:16:49.615334] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:12.289 [2024-04-19 03:16:49.615347] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:12.289 passed 00:03:12.289 Test: mem map registration ...[2024-04-19 03:16:49.656271] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:12.289 [2024-04-19 03:16:49.656289] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:12.289 passed 00:03:12.289 Test: mem map adjacent registrations ...passed 00:03:12.289 00:03:12.289 Run Summary: Type Total Ran Passed Failed Inactive 00:03:12.289 suites 1 1 n/a 0 0 00:03:12.289 tests 4 4 4 0 0 00:03:12.289 asserts 152 152 152 0 n/a 00:03:12.289 00:03:12.289 Elapsed time = 0.144 seconds 00:03:12.289 00:03:12.289 real 0m0.151s 00:03:12.289 user 0m0.145s 00:03:12.289 sys 0m0.006s 00:03:12.289 03:16:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:12.289 03:16:49 -- common/autotest_common.sh@10 -- # set +x 
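[editor's note] run_test is the harness wrapper seen throughout this log; the env suites themselves are plain CUnit executables, so they can presumably also be invoked by hand from the checkout (an assumption about convenience, not something the log does). Paths are the ones the trace prints:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/env/memory/memory_ut      # mem map alloc/translation/registration tests
  ./test/env/vtophys/vtophys       # EAL bring-up plus the malloc expand/shrink walk
  ./test/env/pci/pci_ut            # the device-claim failure path exercised below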
00:03:12.289 ************************************ 00:03:12.289 END TEST env_memory 00:03:12.289 ************************************ 00:03:12.289 03:16:49 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:12.289 03:16:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:12.289 03:16:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:12.289 03:16:49 -- common/autotest_common.sh@10 -- # set +x 00:03:12.289 ************************************ 00:03:12.289 START TEST env_vtophys 00:03:12.289 ************************************ 00:03:12.289 03:16:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:12.289 EAL: lib.eal log level changed from notice to debug 00:03:12.289 EAL: Detected lcore 0 as core 0 on socket 0 00:03:12.289 EAL: Detected lcore 1 as core 1 on socket 0 00:03:12.289 EAL: Detected lcore 2 as core 2 on socket 0 00:03:12.289 EAL: Detected lcore 3 as core 3 on socket 0 00:03:12.289 EAL: Detected lcore 4 as core 4 on socket 0 00:03:12.289 EAL: Detected lcore 5 as core 5 on socket 0 00:03:12.289 EAL: Detected lcore 6 as core 8 on socket 0 00:03:12.289 EAL: Detected lcore 7 as core 9 on socket 0 00:03:12.289 EAL: Detected lcore 8 as core 10 on socket 0 00:03:12.289 EAL: Detected lcore 9 as core 11 on socket 0 00:03:12.289 EAL: Detected lcore 10 as core 12 on socket 0 00:03:12.289 EAL: Detected lcore 11 as core 13 on socket 0 00:03:12.289 EAL: Detected lcore 12 as core 0 on socket 1 00:03:12.289 EAL: Detected lcore 13 as core 1 on socket 1 00:03:12.289 EAL: Detected lcore 14 as core 2 on socket 1 00:03:12.289 EAL: Detected lcore 15 as core 3 on socket 1 00:03:12.289 EAL: Detected lcore 16 as core 4 on socket 1 00:03:12.289 EAL: Detected lcore 17 as core 5 on socket 1 00:03:12.289 EAL: Detected lcore 18 as core 8 on socket 1 00:03:12.289 EAL: Detected lcore 19 as core 9 on socket 1 00:03:12.289 EAL: Detected lcore 20 as core 10 on socket 1 00:03:12.289 EAL: Detected lcore 21 as core 11 on socket 1 00:03:12.289 EAL: Detected lcore 22 as core 12 on socket 1 00:03:12.289 EAL: Detected lcore 23 as core 13 on socket 1 00:03:12.289 EAL: Detected lcore 24 as core 0 on socket 0 00:03:12.289 EAL: Detected lcore 25 as core 1 on socket 0 00:03:12.289 EAL: Detected lcore 26 as core 2 on socket 0 00:03:12.289 EAL: Detected lcore 27 as core 3 on socket 0 00:03:12.289 EAL: Detected lcore 28 as core 4 on socket 0 00:03:12.289 EAL: Detected lcore 29 as core 5 on socket 0 00:03:12.289 EAL: Detected lcore 30 as core 8 on socket 0 00:03:12.289 EAL: Detected lcore 31 as core 9 on socket 0 00:03:12.289 EAL: Detected lcore 32 as core 10 on socket 0 00:03:12.289 EAL: Detected lcore 33 as core 11 on socket 0 00:03:12.289 EAL: Detected lcore 34 as core 12 on socket 0 00:03:12.289 EAL: Detected lcore 35 as core 13 on socket 0 00:03:12.289 EAL: Detected lcore 36 as core 0 on socket 1 00:03:12.289 EAL: Detected lcore 37 as core 1 on socket 1 00:03:12.289 EAL: Detected lcore 38 as core 2 on socket 1 00:03:12.289 EAL: Detected lcore 39 as core 3 on socket 1 00:03:12.289 EAL: Detected lcore 40 as core 4 on socket 1 00:03:12.289 EAL: Detected lcore 41 as core 5 on socket 1 00:03:12.289 EAL: Detected lcore 42 as core 8 on socket 1 00:03:12.289 EAL: Detected lcore 43 as core 9 on socket 1 00:03:12.289 EAL: Detected lcore 44 as core 10 on socket 1 00:03:12.289 EAL: Detected lcore 45 as core 11 on socket 1 00:03:12.289 EAL: Detected lcore 46 as core 12 on 
socket 1 00:03:12.289 EAL: Detected lcore 47 as core 13 on socket 1 00:03:12.289 EAL: Maximum logical cores by configuration: 128 00:03:12.289 EAL: Detected CPU lcores: 48 00:03:12.289 EAL: Detected NUMA nodes: 2 00:03:12.289 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:12.289 EAL: Detected shared linkage of DPDK 00:03:12.289 EAL: No shared files mode enabled, IPC will be disabled 00:03:12.548 EAL: Bus pci wants IOVA as 'DC' 00:03:12.548 EAL: Buses did not request a specific IOVA mode. 00:03:12.548 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:12.548 EAL: Selected IOVA mode 'VA' 00:03:12.548 EAL: No free 2048 kB hugepages reported on node 1 00:03:12.548 EAL: Probing VFIO support... 00:03:12.548 EAL: IOMMU type 1 (Type 1) is supported 00:03:12.548 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:12.548 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:12.548 EAL: VFIO support initialized 00:03:12.548 EAL: Ask a virtual area of 0x2e000 bytes 00:03:12.548 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:12.548 EAL: Setting up physically contiguous memory... 00:03:12.548 EAL: Setting maximum number of open files to 524288 00:03:12.548 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:12.548 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:12.548 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:12.548 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.548 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:12.548 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:12.548 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.548 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:12.548 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:12.548 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.548 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:12.548 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:12.548 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.548 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:12.548 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:12.548 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.548 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:12.548 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:12.548 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.548 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:12.548 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:12.548 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.548 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:12.548 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:12.548 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.548 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:12.548 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:12.548 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:12.548 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.548 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:12.548 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:12.548 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.548 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:12.548 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 
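[editor's note] A quick check on the reservation sizes above: each memseg list is created with n_segs:8192 at hugepage_sz:2097152, and

  8192 segs x 2,097,152 B = 17,179,869,184 B = 0x400000000

which is exactly the size of every large virtual area the EAL asks for (four lists per NUMA node). The small 0x61000-byte reservations preceding each one appear to hold the lists' bookkeeping arrays.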
00:03:12.548 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.548 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:12.548 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:12.548 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.548 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:12.548 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:12.548 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.548 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:12.548 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:12.548 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.548 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:12.548 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:12.548 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.548 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:12.548 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:12.548 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.548 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:12.548 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:12.548 EAL: Hugepages will be freed exactly as allocated. 00:03:12.548 EAL: No shared files mode enabled, IPC is disabled 00:03:12.548 EAL: No shared files mode enabled, IPC is disabled 00:03:12.548 EAL: TSC frequency is ~2700000 KHz 00:03:12.548 EAL: Main lcore 0 is ready (tid=7f9ba8e4ca00;cpuset=[0]) 00:03:12.548 EAL: Trying to obtain current memory policy. 00:03:12.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.548 EAL: Restoring previous memory policy: 0 00:03:12.548 EAL: request: mp_malloc_sync 00:03:12.548 EAL: No shared files mode enabled, IPC is disabled 00:03:12.548 EAL: Heap on socket 0 was expanded by 2MB 00:03:12.548 EAL: No shared files mode enabled, IPC is disabled 00:03:12.548 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:12.548 EAL: Mem event callback 'spdk:(nil)' registered 00:03:12.548 00:03:12.548 00:03:12.548 CUnit - A unit testing framework for C - Version 2.1-3 00:03:12.548 http://cunit.sourceforge.net/ 00:03:12.548 00:03:12.548 00:03:12.548 Suite: components_suite 00:03:12.548 Test: vtophys_malloc_test ...passed 00:03:12.548 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:12.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.548 EAL: Restoring previous memory policy: 4 00:03:12.548 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.548 EAL: request: mp_malloc_sync 00:03:12.548 EAL: No shared files mode enabled, IPC is disabled 00:03:12.548 EAL: Heap on socket 0 was expanded by 4MB 00:03:12.548 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.548 EAL: request: mp_malloc_sync 00:03:12.548 EAL: No shared files mode enabled, IPC is disabled 00:03:12.548 EAL: Heap on socket 0 was shrunk by 4MB 00:03:12.548 EAL: Trying to obtain current memory policy. 
00:03:12.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.548 EAL: Restoring previous memory policy: 4 00:03:12.548 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.548 EAL: request: mp_malloc_sync 00:03:12.548 EAL: No shared files mode enabled, IPC is disabled 00:03:12.548 EAL: Heap on socket 0 was expanded by 6MB 00:03:12.548 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.548 EAL: request: mp_malloc_sync 00:03:12.548 EAL: No shared files mode enabled, IPC is disabled 00:03:12.548 EAL: Heap on socket 0 was shrunk by 6MB 00:03:12.548 EAL: Trying to obtain current memory policy. 00:03:12.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.548 EAL: Restoring previous memory policy: 4 00:03:12.548 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.548 EAL: request: mp_malloc_sync 00:03:12.548 EAL: No shared files mode enabled, IPC is disabled 00:03:12.548 EAL: Heap on socket 0 was expanded by 10MB 00:03:12.548 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.549 EAL: request: mp_malloc_sync 00:03:12.549 EAL: No shared files mode enabled, IPC is disabled 00:03:12.549 EAL: Heap on socket 0 was shrunk by 10MB 00:03:12.549 EAL: Trying to obtain current memory policy. 00:03:12.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.549 EAL: Restoring previous memory policy: 4 00:03:12.549 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.549 EAL: request: mp_malloc_sync 00:03:12.549 EAL: No shared files mode enabled, IPC is disabled 00:03:12.549 EAL: Heap on socket 0 was expanded by 18MB 00:03:12.549 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.549 EAL: request: mp_malloc_sync 00:03:12.549 EAL: No shared files mode enabled, IPC is disabled 00:03:12.549 EAL: Heap on socket 0 was shrunk by 18MB 00:03:12.549 EAL: Trying to obtain current memory policy. 00:03:12.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.549 EAL: Restoring previous memory policy: 4 00:03:12.549 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.549 EAL: request: mp_malloc_sync 00:03:12.549 EAL: No shared files mode enabled, IPC is disabled 00:03:12.549 EAL: Heap on socket 0 was expanded by 34MB 00:03:12.549 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.549 EAL: request: mp_malloc_sync 00:03:12.549 EAL: No shared files mode enabled, IPC is disabled 00:03:12.549 EAL: Heap on socket 0 was shrunk by 34MB 00:03:12.549 EAL: Trying to obtain current memory policy. 00:03:12.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.549 EAL: Restoring previous memory policy: 4 00:03:12.549 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.549 EAL: request: mp_malloc_sync 00:03:12.549 EAL: No shared files mode enabled, IPC is disabled 00:03:12.549 EAL: Heap on socket 0 was expanded by 66MB 00:03:12.549 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.549 EAL: request: mp_malloc_sync 00:03:12.549 EAL: No shared files mode enabled, IPC is disabled 00:03:12.549 EAL: Heap on socket 0 was shrunk by 66MB 00:03:12.549 EAL: Trying to obtain current memory policy. 
00:03:12.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.549 EAL: Restoring previous memory policy: 4 00:03:12.549 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.549 EAL: request: mp_malloc_sync 00:03:12.549 EAL: No shared files mode enabled, IPC is disabled 00:03:12.549 EAL: Heap on socket 0 was expanded by 130MB 00:03:12.549 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.549 EAL: request: mp_malloc_sync 00:03:12.549 EAL: No shared files mode enabled, IPC is disabled 00:03:12.549 EAL: Heap on socket 0 was shrunk by 130MB 00:03:12.549 EAL: Trying to obtain current memory policy. 00:03:12.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.808 EAL: Restoring previous memory policy: 4 00:03:12.808 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.808 EAL: request: mp_malloc_sync 00:03:12.808 EAL: No shared files mode enabled, IPC is disabled 00:03:12.808 EAL: Heap on socket 0 was expanded by 258MB 00:03:12.808 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.808 EAL: request: mp_malloc_sync 00:03:12.808 EAL: No shared files mode enabled, IPC is disabled 00:03:12.808 EAL: Heap on socket 0 was shrunk by 258MB 00:03:12.808 EAL: Trying to obtain current memory policy. 00:03:12.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.066 EAL: Restoring previous memory policy: 4 00:03:13.066 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.066 EAL: request: mp_malloc_sync 00:03:13.066 EAL: No shared files mode enabled, IPC is disabled 00:03:13.066 EAL: Heap on socket 0 was expanded by 514MB 00:03:13.066 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.066 EAL: request: mp_malloc_sync 00:03:13.066 EAL: No shared files mode enabled, IPC is disabled 00:03:13.066 EAL: Heap on socket 0 was shrunk by 514MB 00:03:13.066 EAL: Trying to obtain current memory policy. 
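[editor's note] The expand/shrink cycle above is vtophys_spdk_malloc_test walking an allocation ladder: after the initial 2 MB bootstrap, the heap grows by 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB, i.e. 2^k + 2 MB for k = 1..10, so each step roughly doubles the allocation size. Every "spdk:(nil)" line is the mem event callback registered earlier being notified of the corresponding grow or shrink.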
00:03:13.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.633 EAL: Restoring previous memory policy: 4 00:03:13.633 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.633 EAL: request: mp_malloc_sync 00:03:13.633 EAL: No shared files mode enabled, IPC is disabled 00:03:13.633 EAL: Heap on socket 0 was expanded by 1026MB 00:03:13.633 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.892 EAL: request: mp_malloc_sync 00:03:13.892 EAL: No shared files mode enabled, IPC is disabled 00:03:13.892 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:13.892 passed 00:03:13.892 00:03:13.892 Run Summary: Type Total Ran Passed Failed Inactive 00:03:13.892 suites 1 1 n/a 0 0 00:03:13.892 tests 2 2 2 0 0 00:03:13.892 asserts 497 497 497 0 n/a 00:03:13.892 00:03:13.892 Elapsed time = 1.397 seconds 00:03:13.892 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.892 EAL: request: mp_malloc_sync 00:03:13.892 EAL: No shared files mode enabled, IPC is disabled 00:03:13.892 EAL: Heap on socket 0 was shrunk by 2MB 00:03:13.892 EAL: No shared files mode enabled, IPC is disabled 00:03:13.892 EAL: No shared files mode enabled, IPC is disabled 00:03:13.892 EAL: No shared files mode enabled, IPC is disabled 00:03:13.892 00:03:13.892 real 0m1.514s 00:03:13.892 user 0m0.875s 00:03:13.892 sys 0m0.603s 00:03:13.892 03:16:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:13.892 03:16:51 -- common/autotest_common.sh@10 -- # set +x 00:03:13.892 ************************************ 00:03:13.892 END TEST env_vtophys 00:03:13.892 ************************************ 00:03:13.892 03:16:51 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:13.892 03:16:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:13.892 03:16:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:13.892 03:16:51 -- common/autotest_common.sh@10 -- # set +x 00:03:13.892 ************************************ 00:03:13.892 START TEST env_pci 00:03:13.892 ************************************ 00:03:13.892 03:16:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:14.150 00:03:14.150 00:03:14.150 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.150 http://cunit.sourceforge.net/ 00:03:14.150 00:03:14.150 00:03:14.150 Suite: pci 00:03:14.150 Test: pci_hook ...[2024-04-19 03:16:51.458253] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 128457 has claimed it 00:03:14.150 EAL: Cannot find device (10000:00:01.0) 00:03:14.150 EAL: Failed to attach device on primary process 00:03:14.150 passed 00:03:14.151 00:03:14.151 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.151 suites 1 1 n/a 0 0 00:03:14.151 tests 1 1 1 0 0 00:03:14.151 asserts 25 25 25 0 n/a 00:03:14.151 00:03:14.151 Elapsed time = 0.021 seconds 00:03:14.151 00:03:14.151 real 0m0.033s 00:03:14.151 user 0m0.011s 00:03:14.151 sys 0m0.022s 00:03:14.151 03:16:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:14.151 03:16:51 -- common/autotest_common.sh@10 -- # set +x 00:03:14.151 ************************************ 00:03:14.151 END TEST env_pci 00:03:14.151 ************************************ 00:03:14.151 03:16:51 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:14.151 03:16:51 -- env/env.sh@15 -- # uname 00:03:14.151 03:16:51 -- env/env.sh@15 -- # '[' Linux = 
00:03:14.151 03:16:51 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:14.151 03:16:51 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:14.151 03:16:51 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:03:14.151 03:16:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:14.151 03:16:51 -- common/autotest_common.sh@10 -- # set +x
00:03:14.151 ************************************
00:03:14.151 START TEST env_dpdk_post_init
00:03:14.151 ************************************
00:03:14.151 03:16:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:14.151 EAL: Detected CPU lcores: 48
00:03:14.151 EAL: Detected NUMA nodes: 2
00:03:14.151 EAL: Detected shared linkage of DPDK
00:03:14.151 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:14.151 EAL: Selected IOVA mode 'VA'
00:03:14.151 EAL: No free 2048 kB hugepages reported on node 1
00:03:14.151 EAL: VFIO support initialized
00:03:14.151 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:14.410 EAL: Using IOMMU type 1 (Type 1)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:03:14.410 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:03:15.347 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1)
00:03:18.714 EAL: Releasing PCI mapped resource for 0000:88:00.0
00:03:18.714 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000
00:03:18.714 Starting DPDK initialization...
00:03:18.714 Starting SPDK post initialization...
00:03:18.714 SPDK NVMe probe
00:03:18.714 Attaching to 0000:88:00.0
00:03:18.714 Attached to 0000:88:00.0
00:03:18.714 Cleaning up...
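The probe lines above come from env_dpdk_post_init re-initializing the SPDK env and letting the NVMe driver claim 0000:88:00.0. A rough sketch of the same probe/attach/detach flow using the public API (spdk_env_init, spdk_nvme_probe); the app name is a placeholder and error handling is trimmed:

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true; /* claim every controller the PCI scan offers */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attached to %s\n", trid->traddr);
    spdk_nvme_detach(ctrlr); /* "Cleaning up..." */
}

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "post_init_sketch"; /* hypothetical app name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    /* NULL trid: default to scanning the local PCIe bus */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0;
}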
00:03:18.714
00:03:18.714 real 0m4.388s
00:03:18.714 user 0m3.262s
00:03:18.714 sys 0m0.185s
00:03:18.714 03:16:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:18.714 03:16:56 -- common/autotest_common.sh@10 -- # set +x
00:03:18.714 ************************************
00:03:18.714 END TEST env_dpdk_post_init
00:03:18.714 ************************************
00:03:18.714 03:16:56 -- env/env.sh@26 -- # uname
00:03:18.714 03:16:56 -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:18.714 03:16:56 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:18.714 03:16:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:18.714 03:16:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:18.714 03:16:56 -- common/autotest_common.sh@10 -- # set +x
00:03:18.714 ************************************
00:03:18.714 START TEST env_mem_callbacks
00:03:18.714 ************************************
00:03:18.714 03:16:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:18.714 EAL: Detected CPU lcores: 48
00:03:18.714 EAL: Detected NUMA nodes: 2
00:03:18.714 EAL: Detected shared linkage of DPDK
00:03:18.714 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:18.714 EAL: Selected IOVA mode 'VA'
00:03:18.714 EAL: No free 2048 kB hugepages reported on node 1
00:03:18.714 EAL: VFIO support initialized
00:03:18.715 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:18.715
00:03:18.715
00:03:18.715 CUnit - A unit testing framework for C - Version 2.1-3
00:03:18.715 http://cunit.sourceforge.net/
00:03:18.715
00:03:18.715
00:03:18.715 Suite: memory
00:03:18.715 Test: test ...
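The register/unregister trace that follows is printed from a memory-map notify callback: every spdk_mem_register()/spdk_mem_unregister() (including the ones the allocator issues internally) is broadcast to each registered map. A sketch of such an observer, assuming the spdk_mem_map_alloc()/spdk_mem_map_ops notify interface from spdk/env.h; it is modeled on, but is not, the test's own callback:

#include <stdio.h>
#include "spdk/env.h"

/* One line per REGISTER/UNREGISTER notification, in the style of the
 * trace below. */
static int
mem_notify(void *cb_ctx, struct spdk_mem_map *map,
           enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
    printf("%s %p %zu\n",
           action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
           vaddr, size);
    return 0;
}

static const struct spdk_mem_map_ops observer_ops = {
    .notify_cb = mem_notify,
};

/* Allocating the map replays REGISTER for every region already known to
 * the env layer; later register/unregister calls fire the callback again. */
static struct spdk_mem_map *
make_observer(void)
{
    return spdk_mem_map_alloc(0, &observer_ops, NULL);
}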
00:03:18.715 register 0x200000200000 2097152
00:03:18.715 malloc 3145728
00:03:18.715 register 0x200000400000 4194304
00:03:18.715 buf 0x200000500000 len 3145728 PASSED
00:03:18.715 malloc 64
00:03:18.715 buf 0x2000004fff40 len 64 PASSED
00:03:18.715 malloc 4194304
00:03:18.715 register 0x200000800000 6291456
00:03:18.715 buf 0x200000a00000 len 4194304 PASSED
00:03:18.715 free 0x200000500000 3145728
00:03:18.715 free 0x2000004fff40 64
00:03:18.715 unregister 0x200000400000 4194304 PASSED
00:03:18.715 free 0x200000a00000 4194304
00:03:18.715 unregister 0x200000800000 6291456 PASSED
00:03:18.715 malloc 8388608
00:03:18.715 register 0x200000400000 10485760
00:03:18.715 buf 0x200000600000 len 8388608 PASSED
00:03:18.715 free 0x200000600000 8388608
00:03:18.715 unregister 0x200000400000 10485760 PASSED
00:03:18.715 passed
00:03:18.715
00:03:18.715 Run Summary: Type Total Ran Passed Failed Inactive
00:03:18.715 suites 1 1 n/a 0 0
00:03:18.715 tests 1 1 1 0 0
00:03:18.715 asserts 15 15 15 0 n/a
00:03:18.715
00:03:18.715 Elapsed time = 0.005 seconds
00:03:18.715
00:03:18.715 real 0m0.049s
00:03:18.715 user 0m0.013s
00:03:18.715 sys 0m0.035s
00:03:18.715 03:16:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:18.715 03:16:56 -- common/autotest_common.sh@10 -- # set +x
00:03:18.715 ************************************
00:03:18.715 END TEST env_mem_callbacks
00:03:18.715 ************************************
00:03:18.715
00:03:18.715 real 0m6.774s
00:03:18.715 user 0m4.537s
00:03:18.715 sys 0m1.219s
00:03:18.715 03:16:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:18.715 03:16:56 -- common/autotest_common.sh@10 -- # set +x
00:03:18.715 ************************************
00:03:18.715 END TEST env
00:03:18.715 ************************************
00:03:18.715 03:16:56 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:18.715 03:16:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:18.715 03:16:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:18.715 03:16:56 -- common/autotest_common.sh@10 -- # set +x
00:03:18.974 ************************************
00:03:18.974 START TEST rpc
00:03:18.974 ************************************
00:03:18.974 03:16:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:18.974 * Looking for test storage...
00:03:18.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:18.974 03:16:56 -- rpc/rpc.sh@65 -- # spdk_pid=129136
00:03:18.974 03:16:56 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:18.974 03:16:56 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:18.974 03:16:56 -- rpc/rpc.sh@67 -- # waitforlisten 129136
00:03:18.974 03:16:56 -- common/autotest_common.sh@817 -- # '[' -z 129136 ']'
00:03:18.974 03:16:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:18.974 03:16:56 -- common/autotest_common.sh@822 -- # local max_retries=100
00:03:18.974 03:16:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:18.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
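waitforlisten, whose "Waiting for process..." message was just echoed above with max_retries=100, simply retries a connect() on /var/tmp/spdk.sock until spdk_tgt's RPC server accepts. A C equivalent, as an illustration of the shell helper's behavior rather than a transcription of it:

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Poll the RPC socket until a listener accepts, or give up. */
static int
wait_for_rpc_socket(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int i;

    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);
    for (i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0) {
            return -1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0; /* spdk_tgt is listening */
        }
        close(fd);
        usleep(100 * 1000); /* retry interval; the shell helper also sleeps */
    }
    return -1;
}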
00:03:18.974 03:16:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:18.974 03:16:56 -- common/autotest_common.sh@10 -- # set +x 00:03:18.974 [2024-04-19 03:16:56.404407] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:03:18.974 [2024-04-19 03:16:56.404500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129136 ] 00:03:18.974 EAL: No free 2048 kB hugepages reported on node 1 00:03:18.974 [2024-04-19 03:16:56.462270] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:19.264 [2024-04-19 03:16:56.577310] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:19.264 [2024-04-19 03:16:56.577378] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 129136' to capture a snapshot of events at runtime. 00:03:19.264 [2024-04-19 03:16:56.577406] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:19.264 [2024-04-19 03:16:56.577420] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:19.264 [2024-04-19 03:16:56.577447] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid129136 for offline analysis/debug. 00:03:19.264 [2024-04-19 03:16:56.577477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:19.523 03:16:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:19.523 03:16:56 -- common/autotest_common.sh@850 -- # return 0 00:03:19.523 03:16:56 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:19.523 03:16:56 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:19.523 03:16:56 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:19.523 03:16:56 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:19.523 03:16:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:19.523 03:16:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:19.523 03:16:56 -- common/autotest_common.sh@10 -- # set +x 00:03:19.523 ************************************ 00:03:19.523 START TEST rpc_integrity 00:03:19.523 ************************************ 00:03:19.523 03:16:56 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:19.523 03:16:56 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:19.523 03:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:19.523 03:16:56 -- common/autotest_common.sh@10 -- # set +x 00:03:19.523 03:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:19.523 03:16:56 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:19.523 03:16:56 -- rpc/rpc.sh@13 -- # jq length 00:03:19.523 03:16:57 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:19.523 03:16:57 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:19.523 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:03:19.523 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:19.523 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:19.523 03:16:57 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:19.523 03:16:57 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:19.523 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:19.523 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:19.523 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:19.523 03:16:57 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:19.523 { 00:03:19.523 "name": "Malloc0", 00:03:19.523 "aliases": [ 00:03:19.523 "098f371b-f6f2-4d72-a549-dffde01ac411" 00:03:19.524 ], 00:03:19.524 "product_name": "Malloc disk", 00:03:19.524 "block_size": 512, 00:03:19.524 "num_blocks": 16384, 00:03:19.524 "uuid": "098f371b-f6f2-4d72-a549-dffde01ac411", 00:03:19.524 "assigned_rate_limits": { 00:03:19.524 "rw_ios_per_sec": 0, 00:03:19.524 "rw_mbytes_per_sec": 0, 00:03:19.524 "r_mbytes_per_sec": 0, 00:03:19.524 "w_mbytes_per_sec": 0 00:03:19.524 }, 00:03:19.524 "claimed": false, 00:03:19.524 "zoned": false, 00:03:19.524 "supported_io_types": { 00:03:19.524 "read": true, 00:03:19.524 "write": true, 00:03:19.524 "unmap": true, 00:03:19.524 "write_zeroes": true, 00:03:19.524 "flush": true, 00:03:19.524 "reset": true, 00:03:19.524 "compare": false, 00:03:19.524 "compare_and_write": false, 00:03:19.524 "abort": true, 00:03:19.524 "nvme_admin": false, 00:03:19.524 "nvme_io": false 00:03:19.524 }, 00:03:19.524 "memory_domains": [ 00:03:19.524 { 00:03:19.524 "dma_device_id": "system", 00:03:19.524 "dma_device_type": 1 00:03:19.524 }, 00:03:19.524 { 00:03:19.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:19.524 "dma_device_type": 2 00:03:19.524 } 00:03:19.524 ], 00:03:19.524 "driver_specific": {} 00:03:19.524 } 00:03:19.524 ]' 00:03:19.524 03:16:57 -- rpc/rpc.sh@17 -- # jq length 00:03:19.524 03:16:57 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:19.524 03:16:57 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:19.524 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:19.524 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:19.524 [2024-04-19 03:16:57.070699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:19.524 [2024-04-19 03:16:57.070746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:19.524 [2024-04-19 03:16:57.070769] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x76bd50 00:03:19.524 [2024-04-19 03:16:57.070784] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:19.524 [2024-04-19 03:16:57.072235] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:19.524 [2024-04-19 03:16:57.072264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:19.524 Passthru0 00:03:19.524 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:19.524 03:16:57 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:19.524 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:19.524 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:19.782 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:19.782 03:16:57 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:19.782 { 00:03:19.782 "name": "Malloc0", 00:03:19.782 "aliases": [ 00:03:19.782 "098f371b-f6f2-4d72-a549-dffde01ac411" 00:03:19.782 ], 00:03:19.782 "product_name": "Malloc disk", 00:03:19.782 "block_size": 512, 
00:03:19.782 "num_blocks": 16384, 00:03:19.782 "uuid": "098f371b-f6f2-4d72-a549-dffde01ac411", 00:03:19.782 "assigned_rate_limits": { 00:03:19.782 "rw_ios_per_sec": 0, 00:03:19.782 "rw_mbytes_per_sec": 0, 00:03:19.782 "r_mbytes_per_sec": 0, 00:03:19.782 "w_mbytes_per_sec": 0 00:03:19.782 }, 00:03:19.782 "claimed": true, 00:03:19.782 "claim_type": "exclusive_write", 00:03:19.782 "zoned": false, 00:03:19.782 "supported_io_types": { 00:03:19.782 "read": true, 00:03:19.782 "write": true, 00:03:19.782 "unmap": true, 00:03:19.782 "write_zeroes": true, 00:03:19.783 "flush": true, 00:03:19.783 "reset": true, 00:03:19.783 "compare": false, 00:03:19.783 "compare_and_write": false, 00:03:19.783 "abort": true, 00:03:19.783 "nvme_admin": false, 00:03:19.783 "nvme_io": false 00:03:19.783 }, 00:03:19.783 "memory_domains": [ 00:03:19.783 { 00:03:19.783 "dma_device_id": "system", 00:03:19.783 "dma_device_type": 1 00:03:19.783 }, 00:03:19.783 { 00:03:19.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:19.783 "dma_device_type": 2 00:03:19.783 } 00:03:19.783 ], 00:03:19.783 "driver_specific": {} 00:03:19.783 }, 00:03:19.783 { 00:03:19.783 "name": "Passthru0", 00:03:19.783 "aliases": [ 00:03:19.783 "345619d4-9e83-525e-83d5-5af25cee700a" 00:03:19.783 ], 00:03:19.783 "product_name": "passthru", 00:03:19.783 "block_size": 512, 00:03:19.783 "num_blocks": 16384, 00:03:19.783 "uuid": "345619d4-9e83-525e-83d5-5af25cee700a", 00:03:19.783 "assigned_rate_limits": { 00:03:19.783 "rw_ios_per_sec": 0, 00:03:19.783 "rw_mbytes_per_sec": 0, 00:03:19.783 "r_mbytes_per_sec": 0, 00:03:19.783 "w_mbytes_per_sec": 0 00:03:19.783 }, 00:03:19.783 "claimed": false, 00:03:19.783 "zoned": false, 00:03:19.783 "supported_io_types": { 00:03:19.783 "read": true, 00:03:19.783 "write": true, 00:03:19.783 "unmap": true, 00:03:19.783 "write_zeroes": true, 00:03:19.783 "flush": true, 00:03:19.783 "reset": true, 00:03:19.783 "compare": false, 00:03:19.783 "compare_and_write": false, 00:03:19.783 "abort": true, 00:03:19.783 "nvme_admin": false, 00:03:19.783 "nvme_io": false 00:03:19.783 }, 00:03:19.783 "memory_domains": [ 00:03:19.783 { 00:03:19.783 "dma_device_id": "system", 00:03:19.783 "dma_device_type": 1 00:03:19.783 }, 00:03:19.783 { 00:03:19.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:19.783 "dma_device_type": 2 00:03:19.783 } 00:03:19.783 ], 00:03:19.783 "driver_specific": { 00:03:19.783 "passthru": { 00:03:19.783 "name": "Passthru0", 00:03:19.783 "base_bdev_name": "Malloc0" 00:03:19.783 } 00:03:19.783 } 00:03:19.783 } 00:03:19.783 ]' 00:03:19.783 03:16:57 -- rpc/rpc.sh@21 -- # jq length 00:03:19.783 03:16:57 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:19.783 03:16:57 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:19.783 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:19.783 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:19.783 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:19.783 03:16:57 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:19.783 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:19.783 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:19.783 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:19.783 03:16:57 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:19.783 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:19.783 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:19.783 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:19.783 03:16:57 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:03:19.783 03:16:57 -- rpc/rpc.sh@26 -- # jq length 00:03:19.783 03:16:57 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:19.783 00:03:19.783 real 0m0.232s 00:03:19.783 user 0m0.145s 00:03:19.783 sys 0m0.027s 00:03:19.783 03:16:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:19.783 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:19.783 ************************************ 00:03:19.783 END TEST rpc_integrity 00:03:19.783 ************************************ 00:03:19.783 03:16:57 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:19.783 03:16:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:19.783 03:16:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:19.783 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:19.783 ************************************ 00:03:19.783 START TEST rpc_plugins 00:03:19.783 ************************************ 00:03:19.783 03:16:57 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:03:19.783 03:16:57 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:19.783 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:19.783 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:19.783 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:19.783 03:16:57 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:19.783 03:16:57 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:19.783 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:19.783 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:19.783 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:19.783 03:16:57 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:19.783 { 00:03:19.783 "name": "Malloc1", 00:03:19.783 "aliases": [ 00:03:19.783 "3f101c87-093f-41b5-8185-c94d4310ef06" 00:03:19.783 ], 00:03:19.783 "product_name": "Malloc disk", 00:03:19.783 "block_size": 4096, 00:03:19.783 "num_blocks": 256, 00:03:19.783 "uuid": "3f101c87-093f-41b5-8185-c94d4310ef06", 00:03:19.783 "assigned_rate_limits": { 00:03:19.783 "rw_ios_per_sec": 0, 00:03:19.783 "rw_mbytes_per_sec": 0, 00:03:19.783 "r_mbytes_per_sec": 0, 00:03:19.783 "w_mbytes_per_sec": 0 00:03:19.783 }, 00:03:19.783 "claimed": false, 00:03:19.783 "zoned": false, 00:03:19.783 "supported_io_types": { 00:03:19.783 "read": true, 00:03:19.783 "write": true, 00:03:19.783 "unmap": true, 00:03:19.783 "write_zeroes": true, 00:03:19.783 "flush": true, 00:03:19.783 "reset": true, 00:03:19.783 "compare": false, 00:03:19.783 "compare_and_write": false, 00:03:19.783 "abort": true, 00:03:19.783 "nvme_admin": false, 00:03:19.783 "nvme_io": false 00:03:19.783 }, 00:03:19.783 "memory_domains": [ 00:03:19.783 { 00:03:19.783 "dma_device_id": "system", 00:03:19.783 "dma_device_type": 1 00:03:19.783 }, 00:03:19.783 { 00:03:19.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:19.783 "dma_device_type": 2 00:03:19.783 } 00:03:19.783 ], 00:03:19.783 "driver_specific": {} 00:03:19.783 } 00:03:19.783 ]' 00:03:19.783 03:16:57 -- rpc/rpc.sh@32 -- # jq length 00:03:20.041 03:16:57 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:20.041 03:16:57 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:20.041 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:20.041 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.041 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:20.041 03:16:57 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:20.041 03:16:57 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:03:20.041 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.041 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:20.041 03:16:57 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:20.041 03:16:57 -- rpc/rpc.sh@36 -- # jq length 00:03:20.041 03:16:57 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:20.041 00:03:20.041 real 0m0.110s 00:03:20.041 user 0m0.071s 00:03:20.041 sys 0m0.010s 00:03:20.041 03:16:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:20.041 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.041 ************************************ 00:03:20.041 END TEST rpc_plugins 00:03:20.041 ************************************ 00:03:20.041 03:16:57 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:20.041 03:16:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.041 03:16:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.041 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.041 ************************************ 00:03:20.041 START TEST rpc_trace_cmd_test 00:03:20.041 ************************************ 00:03:20.041 03:16:57 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:03:20.041 03:16:57 -- rpc/rpc.sh@40 -- # local info 00:03:20.041 03:16:57 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:20.041 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:20.041 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.041 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:20.041 03:16:57 -- rpc/rpc.sh@42 -- # info='{ 00:03:20.041 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid129136", 00:03:20.041 "tpoint_group_mask": "0x8", 00:03:20.041 "iscsi_conn": { 00:03:20.041 "mask": "0x2", 00:03:20.041 "tpoint_mask": "0x0" 00:03:20.041 }, 00:03:20.041 "scsi": { 00:03:20.041 "mask": "0x4", 00:03:20.041 "tpoint_mask": "0x0" 00:03:20.041 }, 00:03:20.042 "bdev": { 00:03:20.042 "mask": "0x8", 00:03:20.042 "tpoint_mask": "0xffffffffffffffff" 00:03:20.042 }, 00:03:20.042 "nvmf_rdma": { 00:03:20.042 "mask": "0x10", 00:03:20.042 "tpoint_mask": "0x0" 00:03:20.042 }, 00:03:20.042 "nvmf_tcp": { 00:03:20.042 "mask": "0x20", 00:03:20.042 "tpoint_mask": "0x0" 00:03:20.042 }, 00:03:20.042 "ftl": { 00:03:20.042 "mask": "0x40", 00:03:20.042 "tpoint_mask": "0x0" 00:03:20.042 }, 00:03:20.042 "blobfs": { 00:03:20.042 "mask": "0x80", 00:03:20.042 "tpoint_mask": "0x0" 00:03:20.042 }, 00:03:20.042 "dsa": { 00:03:20.042 "mask": "0x200", 00:03:20.042 "tpoint_mask": "0x0" 00:03:20.042 }, 00:03:20.042 "thread": { 00:03:20.042 "mask": "0x400", 00:03:20.042 "tpoint_mask": "0x0" 00:03:20.042 }, 00:03:20.042 "nvme_pcie": { 00:03:20.042 "mask": "0x800", 00:03:20.042 "tpoint_mask": "0x0" 00:03:20.042 }, 00:03:20.042 "iaa": { 00:03:20.042 "mask": "0x1000", 00:03:20.042 "tpoint_mask": "0x0" 00:03:20.042 }, 00:03:20.042 "nvme_tcp": { 00:03:20.042 "mask": "0x2000", 00:03:20.042 "tpoint_mask": "0x0" 00:03:20.042 }, 00:03:20.042 "bdev_nvme": { 00:03:20.042 "mask": "0x4000", 00:03:20.042 "tpoint_mask": "0x0" 00:03:20.042 }, 00:03:20.042 "sock": { 00:03:20.042 "mask": "0x8000", 00:03:20.042 "tpoint_mask": "0x0" 00:03:20.042 } 00:03:20.042 }' 00:03:20.042 03:16:57 -- rpc/rpc.sh@43 -- # jq length 00:03:20.042 03:16:57 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:20.042 03:16:57 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:20.300 03:16:57 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:20.300 03:16:57 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
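In the trace_get_info dump above, tpoint_group_mask "0x8" is a bitmap with one bit per trace group; 0x8 is bit 3, the bdev group (spdk_tgt was started with '-e bdev'), and that group's tpoint_mask of 0xffffffffffffffff enables every tracepoint inside it. The check the test performs reduces to plain mask arithmetic, sketched here:

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint64_t tpoint_group_mask = 0x8; /* value reported by trace_get_info */
    int bdev_group_bit = 3;           /* 0x8 == 1 << 3, the "bdev" group */

    if (tpoint_group_mask & (1ULL << bdev_group_bit)) {
        printf("bdev trace group enabled\n");
    }
    return 0;
}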
00:03:20.300 03:16:57 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:20.300 03:16:57 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:20.300 03:16:57 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:20.300 03:16:57 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:20.300 03:16:57 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:20.300 00:03:20.300 real 0m0.193s 00:03:20.300 user 0m0.169s 00:03:20.300 sys 0m0.015s 00:03:20.300 03:16:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:20.300 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.300 ************************************ 00:03:20.300 END TEST rpc_trace_cmd_test 00:03:20.300 ************************************ 00:03:20.300 03:16:57 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:20.300 03:16:57 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:20.300 03:16:57 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:20.300 03:16:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.300 03:16:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.300 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.300 ************************************ 00:03:20.300 START TEST rpc_daemon_integrity 00:03:20.300 ************************************ 00:03:20.300 03:16:57 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:20.300 03:16:57 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:20.300 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:20.300 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.300 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:20.300 03:16:57 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:20.558 03:16:57 -- rpc/rpc.sh@13 -- # jq length 00:03:20.558 03:16:57 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:20.558 03:16:57 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:20.558 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:20.558 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.558 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:20.558 03:16:57 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:20.558 03:16:57 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:20.558 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:20.558 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.558 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:20.558 03:16:57 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:20.558 { 00:03:20.558 "name": "Malloc2", 00:03:20.558 "aliases": [ 00:03:20.558 "485080a2-80b6-4cea-84e6-2e0a0aa62e58" 00:03:20.558 ], 00:03:20.558 "product_name": "Malloc disk", 00:03:20.558 "block_size": 512, 00:03:20.558 "num_blocks": 16384, 00:03:20.558 "uuid": "485080a2-80b6-4cea-84e6-2e0a0aa62e58", 00:03:20.558 "assigned_rate_limits": { 00:03:20.558 "rw_ios_per_sec": 0, 00:03:20.559 "rw_mbytes_per_sec": 0, 00:03:20.559 "r_mbytes_per_sec": 0, 00:03:20.559 "w_mbytes_per_sec": 0 00:03:20.559 }, 00:03:20.559 "claimed": false, 00:03:20.559 "zoned": false, 00:03:20.559 "supported_io_types": { 00:03:20.559 "read": true, 00:03:20.559 "write": true, 00:03:20.559 "unmap": true, 00:03:20.559 "write_zeroes": true, 00:03:20.559 "flush": true, 00:03:20.559 "reset": true, 00:03:20.559 "compare": false, 00:03:20.559 "compare_and_write": false, 00:03:20.559 "abort": true, 00:03:20.559 "nvme_admin": false, 00:03:20.559 "nvme_io": false 00:03:20.559 }, 00:03:20.559 "memory_domains": [ 00:03:20.559 { 00:03:20.559 "dma_device_id": "system", 00:03:20.559 
"dma_device_type": 1 00:03:20.559 }, 00:03:20.559 { 00:03:20.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:20.559 "dma_device_type": 2 00:03:20.559 } 00:03:20.559 ], 00:03:20.559 "driver_specific": {} 00:03:20.559 } 00:03:20.559 ]' 00:03:20.559 03:16:57 -- rpc/rpc.sh@17 -- # jq length 00:03:20.559 03:16:57 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:20.559 03:16:57 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:20.559 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:20.559 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.559 [2024-04-19 03:16:57.957806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:20.559 [2024-04-19 03:16:57.957853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:20.559 [2024-04-19 03:16:57.957878] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x76f6a0 00:03:20.559 [2024-04-19 03:16:57.957894] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:20.559 [2024-04-19 03:16:57.959234] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:20.559 [2024-04-19 03:16:57.959264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:20.559 Passthru0 00:03:20.559 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:20.559 03:16:57 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:20.559 03:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:20.559 03:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.559 03:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:20.559 03:16:57 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:20.559 { 00:03:20.559 "name": "Malloc2", 00:03:20.559 "aliases": [ 00:03:20.559 "485080a2-80b6-4cea-84e6-2e0a0aa62e58" 00:03:20.559 ], 00:03:20.559 "product_name": "Malloc disk", 00:03:20.559 "block_size": 512, 00:03:20.559 "num_blocks": 16384, 00:03:20.559 "uuid": "485080a2-80b6-4cea-84e6-2e0a0aa62e58", 00:03:20.559 "assigned_rate_limits": { 00:03:20.559 "rw_ios_per_sec": 0, 00:03:20.559 "rw_mbytes_per_sec": 0, 00:03:20.559 "r_mbytes_per_sec": 0, 00:03:20.559 "w_mbytes_per_sec": 0 00:03:20.559 }, 00:03:20.559 "claimed": true, 00:03:20.559 "claim_type": "exclusive_write", 00:03:20.559 "zoned": false, 00:03:20.559 "supported_io_types": { 00:03:20.559 "read": true, 00:03:20.559 "write": true, 00:03:20.559 "unmap": true, 00:03:20.559 "write_zeroes": true, 00:03:20.559 "flush": true, 00:03:20.559 "reset": true, 00:03:20.559 "compare": false, 00:03:20.559 "compare_and_write": false, 00:03:20.559 "abort": true, 00:03:20.559 "nvme_admin": false, 00:03:20.559 "nvme_io": false 00:03:20.559 }, 00:03:20.559 "memory_domains": [ 00:03:20.559 { 00:03:20.559 "dma_device_id": "system", 00:03:20.559 "dma_device_type": 1 00:03:20.559 }, 00:03:20.559 { 00:03:20.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:20.559 "dma_device_type": 2 00:03:20.559 } 00:03:20.559 ], 00:03:20.559 "driver_specific": {} 00:03:20.559 }, 00:03:20.559 { 00:03:20.559 "name": "Passthru0", 00:03:20.559 "aliases": [ 00:03:20.559 "d884f22e-a548-5216-bfac-40cc8dac0849" 00:03:20.559 ], 00:03:20.559 "product_name": "passthru", 00:03:20.559 "block_size": 512, 00:03:20.559 "num_blocks": 16384, 00:03:20.559 "uuid": "d884f22e-a548-5216-bfac-40cc8dac0849", 00:03:20.559 "assigned_rate_limits": { 00:03:20.559 "rw_ios_per_sec": 0, 00:03:20.559 "rw_mbytes_per_sec": 0, 00:03:20.559 "r_mbytes_per_sec": 0, 00:03:20.559 
"w_mbytes_per_sec": 0 00:03:20.559 }, 00:03:20.559 "claimed": false, 00:03:20.559 "zoned": false, 00:03:20.559 "supported_io_types": { 00:03:20.559 "read": true, 00:03:20.559 "write": true, 00:03:20.559 "unmap": true, 00:03:20.559 "write_zeroes": true, 00:03:20.559 "flush": true, 00:03:20.559 "reset": true, 00:03:20.559 "compare": false, 00:03:20.559 "compare_and_write": false, 00:03:20.559 "abort": true, 00:03:20.559 "nvme_admin": false, 00:03:20.559 "nvme_io": false 00:03:20.559 }, 00:03:20.559 "memory_domains": [ 00:03:20.559 { 00:03:20.559 "dma_device_id": "system", 00:03:20.559 "dma_device_type": 1 00:03:20.559 }, 00:03:20.559 { 00:03:20.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:20.559 "dma_device_type": 2 00:03:20.559 } 00:03:20.559 ], 00:03:20.559 "driver_specific": { 00:03:20.559 "passthru": { 00:03:20.559 "name": "Passthru0", 00:03:20.559 "base_bdev_name": "Malloc2" 00:03:20.559 } 00:03:20.559 } 00:03:20.559 } 00:03:20.559 ]' 00:03:20.559 03:16:57 -- rpc/rpc.sh@21 -- # jq length 00:03:20.559 03:16:58 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:20.559 03:16:58 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:20.559 03:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:20.559 03:16:58 -- common/autotest_common.sh@10 -- # set +x 00:03:20.559 03:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:20.559 03:16:58 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:20.559 03:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:20.559 03:16:58 -- common/autotest_common.sh@10 -- # set +x 00:03:20.559 03:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:20.559 03:16:58 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:20.559 03:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:20.559 03:16:58 -- common/autotest_common.sh@10 -- # set +x 00:03:20.559 03:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:20.559 03:16:58 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:20.559 03:16:58 -- rpc/rpc.sh@26 -- # jq length 00:03:20.559 03:16:58 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:20.559 00:03:20.559 real 0m0.230s 00:03:20.559 user 0m0.147s 00:03:20.559 sys 0m0.023s 00:03:20.559 03:16:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:20.559 03:16:58 -- common/autotest_common.sh@10 -- # set +x 00:03:20.559 ************************************ 00:03:20.559 END TEST rpc_daemon_integrity 00:03:20.559 ************************************ 00:03:20.559 03:16:58 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:20.559 03:16:58 -- rpc/rpc.sh@84 -- # killprocess 129136 00:03:20.559 03:16:58 -- common/autotest_common.sh@936 -- # '[' -z 129136 ']' 00:03:20.559 03:16:58 -- common/autotest_common.sh@940 -- # kill -0 129136 00:03:20.559 03:16:58 -- common/autotest_common.sh@941 -- # uname 00:03:20.559 03:16:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:20.559 03:16:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129136 00:03:20.818 03:16:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:20.818 03:16:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:20.818 03:16:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129136' 00:03:20.818 killing process with pid 129136 00:03:20.818 03:16:58 -- common/autotest_common.sh@955 -- # kill 129136 00:03:20.818 03:16:58 -- common/autotest_common.sh@960 -- # wait 129136 00:03:21.077 00:03:21.077 real 0m2.298s 00:03:21.077 user 0m2.864s 00:03:21.077 
sys 0m0.750s 00:03:21.077 03:16:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:21.077 03:16:58 -- common/autotest_common.sh@10 -- # set +x 00:03:21.077 ************************************ 00:03:21.077 END TEST rpc 00:03:21.077 ************************************ 00:03:21.077 03:16:58 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:21.077 03:16:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.077 03:16:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.077 03:16:58 -- common/autotest_common.sh@10 -- # set +x 00:03:21.336 ************************************ 00:03:21.336 START TEST skip_rpc 00:03:21.336 ************************************ 00:03:21.336 03:16:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:21.336 * Looking for test storage... 00:03:21.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:21.336 03:16:58 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:21.336 03:16:58 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:21.336 03:16:58 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:21.336 03:16:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.336 03:16:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.336 03:16:58 -- common/autotest_common.sh@10 -- # set +x 00:03:21.336 ************************************ 00:03:21.336 START TEST skip_rpc 00:03:21.336 ************************************ 00:03:21.336 03:16:58 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:03:21.336 03:16:58 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=129734 00:03:21.336 03:16:58 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:21.336 03:16:58 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:21.336 03:16:58 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:21.594 [2024-04-19 03:16:58.912762] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
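The killprocess teardown a few lines up probes liveness with 'kill -0 $pid' before sending a real signal and waiting; signal 0 delivers nothing but still performs the existence and permission checks. The C equivalent of that probe:

#include <errno.h>
#include <signal.h>
#include <sys/types.h>

/* What "kill -0 $pid" relies on: signal 0 runs the checks, sends nothing. */
static int
process_alive(pid_t pid)
{
    if (kill(pid, 0) == 0) {
        return 1;                     /* process exists and is signalable */
    }
    return (errno == ESRCH) ? 0 : -1; /* ESRCH: no such process */
}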
00:03:21.594 [2024-04-19 03:16:58.912841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129734 ] 00:03:21.594 EAL: No free 2048 kB hugepages reported on node 1 00:03:21.595 [2024-04-19 03:16:58.969598] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:21.595 [2024-04-19 03:16:59.086528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:26.857 03:17:03 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:26.857 03:17:03 -- common/autotest_common.sh@638 -- # local es=0 00:03:26.857 03:17:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:26.857 03:17:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:03:26.857 03:17:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:26.857 03:17:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:03:26.857 03:17:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:26.857 03:17:03 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:03:26.857 03:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:26.857 03:17:03 -- common/autotest_common.sh@10 -- # set +x 00:03:26.857 03:17:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:26.857 03:17:03 -- common/autotest_common.sh@641 -- # es=1 00:03:26.857 03:17:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:26.857 03:17:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:03:26.857 03:17:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:26.857 03:17:03 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:26.857 03:17:03 -- rpc/skip_rpc.sh@23 -- # killprocess 129734 00:03:26.857 03:17:03 -- common/autotest_common.sh@936 -- # '[' -z 129734 ']' 00:03:26.857 03:17:03 -- common/autotest_common.sh@940 -- # kill -0 129734 00:03:26.857 03:17:03 -- common/autotest_common.sh@941 -- # uname 00:03:26.857 03:17:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:26.857 03:17:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129734 00:03:26.857 03:17:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:26.857 03:17:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:26.857 03:17:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129734' 00:03:26.857 killing process with pid 129734 00:03:26.858 03:17:03 -- common/autotest_common.sh@955 -- # kill 129734 00:03:26.858 03:17:03 -- common/autotest_common.sh@960 -- # wait 129734 00:03:26.858 00:03:26.858 real 0m5.493s 00:03:26.858 user 0m5.162s 00:03:26.858 sys 0m0.328s 00:03:26.858 03:17:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:26.858 03:17:04 -- common/autotest_common.sh@10 -- # set +x 00:03:26.858 ************************************ 00:03:26.858 END TEST skip_rpc 00:03:26.858 ************************************ 00:03:26.858 03:17:04 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:26.858 03:17:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.858 03:17:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.858 03:17:04 -- common/autotest_common.sh@10 -- # set +x 00:03:27.116 ************************************ 00:03:27.116 START TEST skip_rpc_with_json 00:03:27.116 ************************************ 00:03:27.116 
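rpc_cmd, which the skip_rpc tests wrapped with NOT above, is a thin client that writes a JSON-RPC 2.0 request to /var/tmp/spdk.sock and reads the reply; the nvmf_get_transports request/response pair shown a little further below is one such exchange. Roughly the request shape, as an illustration only (field order and framing are the client's, not guaranteed here):

#include <stdio.h>

int
main(void)
{
    /* Illustrative wire form behind "rpc_cmd nvmf_get_transports --trtype tcp". */
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"id\":1,"
        "\"method\":\"nvmf_get_transports\","
        "\"params\":{\"trtype\":\"tcp\"}}";

    puts(req);
    return 0;
}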
03:17:04 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:03:27.116 03:17:04 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:27.116 03:17:04 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=130362 00:03:27.116 03:17:04 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:27.116 03:17:04 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:27.116 03:17:04 -- rpc/skip_rpc.sh@31 -- # waitforlisten 130362 00:03:27.116 03:17:04 -- common/autotest_common.sh@817 -- # '[' -z 130362 ']' 00:03:27.116 03:17:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:27.116 03:17:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:27.116 03:17:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:27.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:27.116 03:17:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:27.116 03:17:04 -- common/autotest_common.sh@10 -- # set +x 00:03:27.116 [2024-04-19 03:17:04.522699] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:03:27.116 [2024-04-19 03:17:04.522781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130362 ] 00:03:27.116 EAL: No free 2048 kB hugepages reported on node 1 00:03:27.116 [2024-04-19 03:17:04.579534] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:27.375 [2024-04-19 03:17:04.690332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:27.633 03:17:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:27.633 03:17:04 -- common/autotest_common.sh@850 -- # return 0 00:03:27.633 03:17:04 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:27.633 03:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:27.633 03:17:04 -- common/autotest_common.sh@10 -- # set +x 00:03:27.633 [2024-04-19 03:17:04.956573] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:27.633 request: 00:03:27.633 { 00:03:27.633 "trtype": "tcp", 00:03:27.633 "method": "nvmf_get_transports", 00:03:27.633 "req_id": 1 00:03:27.633 } 00:03:27.633 Got JSON-RPC error response 00:03:27.633 response: 00:03:27.633 { 00:03:27.633 "code": -19, 00:03:27.633 "message": "No such device" 00:03:27.633 } 00:03:27.633 03:17:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:27.633 03:17:04 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:27.633 03:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:27.633 03:17:04 -- common/autotest_common.sh@10 -- # set +x 00:03:27.633 [2024-04-19 03:17:04.964703] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:27.633 03:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:27.633 03:17:04 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:27.633 03:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:27.633 03:17:04 -- common/autotest_common.sh@10 -- # set +x 00:03:27.633 03:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:27.633 03:17:05 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:27.633 { 00:03:27.633 
"subsystems": [ 00:03:27.633 { 00:03:27.633 "subsystem": "vfio_user_target", 00:03:27.633 "config": null 00:03:27.633 }, 00:03:27.633 { 00:03:27.633 "subsystem": "keyring", 00:03:27.633 "config": [] 00:03:27.633 }, 00:03:27.633 { 00:03:27.633 "subsystem": "iobuf", 00:03:27.633 "config": [ 00:03:27.633 { 00:03:27.633 "method": "iobuf_set_options", 00:03:27.633 "params": { 00:03:27.633 "small_pool_count": 8192, 00:03:27.633 "large_pool_count": 1024, 00:03:27.633 "small_bufsize": 8192, 00:03:27.633 "large_bufsize": 135168 00:03:27.633 } 00:03:27.633 } 00:03:27.633 ] 00:03:27.633 }, 00:03:27.633 { 00:03:27.633 "subsystem": "sock", 00:03:27.633 "config": [ 00:03:27.633 { 00:03:27.633 "method": "sock_impl_set_options", 00:03:27.633 "params": { 00:03:27.633 "impl_name": "posix", 00:03:27.633 "recv_buf_size": 2097152, 00:03:27.633 "send_buf_size": 2097152, 00:03:27.633 "enable_recv_pipe": true, 00:03:27.633 "enable_quickack": false, 00:03:27.633 "enable_placement_id": 0, 00:03:27.633 "enable_zerocopy_send_server": true, 00:03:27.633 "enable_zerocopy_send_client": false, 00:03:27.633 "zerocopy_threshold": 0, 00:03:27.633 "tls_version": 0, 00:03:27.634 "enable_ktls": false 00:03:27.634 } 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "method": "sock_impl_set_options", 00:03:27.634 "params": { 00:03:27.634 "impl_name": "ssl", 00:03:27.634 "recv_buf_size": 4096, 00:03:27.634 "send_buf_size": 4096, 00:03:27.634 "enable_recv_pipe": true, 00:03:27.634 "enable_quickack": false, 00:03:27.634 "enable_placement_id": 0, 00:03:27.634 "enable_zerocopy_send_server": true, 00:03:27.634 "enable_zerocopy_send_client": false, 00:03:27.634 "zerocopy_threshold": 0, 00:03:27.634 "tls_version": 0, 00:03:27.634 "enable_ktls": false 00:03:27.634 } 00:03:27.634 } 00:03:27.634 ] 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "subsystem": "vmd", 00:03:27.634 "config": [] 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "subsystem": "accel", 00:03:27.634 "config": [ 00:03:27.634 { 00:03:27.634 "method": "accel_set_options", 00:03:27.634 "params": { 00:03:27.634 "small_cache_size": 128, 00:03:27.634 "large_cache_size": 16, 00:03:27.634 "task_count": 2048, 00:03:27.634 "sequence_count": 2048, 00:03:27.634 "buf_count": 2048 00:03:27.634 } 00:03:27.634 } 00:03:27.634 ] 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "subsystem": "bdev", 00:03:27.634 "config": [ 00:03:27.634 { 00:03:27.634 "method": "bdev_set_options", 00:03:27.634 "params": { 00:03:27.634 "bdev_io_pool_size": 65535, 00:03:27.634 "bdev_io_cache_size": 256, 00:03:27.634 "bdev_auto_examine": true, 00:03:27.634 "iobuf_small_cache_size": 128, 00:03:27.634 "iobuf_large_cache_size": 16 00:03:27.634 } 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "method": "bdev_raid_set_options", 00:03:27.634 "params": { 00:03:27.634 "process_window_size_kb": 1024 00:03:27.634 } 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "method": "bdev_iscsi_set_options", 00:03:27.634 "params": { 00:03:27.634 "timeout_sec": 30 00:03:27.634 } 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "method": "bdev_nvme_set_options", 00:03:27.634 "params": { 00:03:27.634 "action_on_timeout": "none", 00:03:27.634 "timeout_us": 0, 00:03:27.634 "timeout_admin_us": 0, 00:03:27.634 "keep_alive_timeout_ms": 10000, 00:03:27.634 "arbitration_burst": 0, 00:03:27.634 "low_priority_weight": 0, 00:03:27.634 "medium_priority_weight": 0, 00:03:27.634 "high_priority_weight": 0, 00:03:27.634 "nvme_adminq_poll_period_us": 10000, 00:03:27.634 "nvme_ioq_poll_period_us": 0, 00:03:27.634 "io_queue_requests": 0, 00:03:27.634 "delay_cmd_submit": true, 
00:03:27.634 "transport_retry_count": 4, 00:03:27.634 "bdev_retry_count": 3, 00:03:27.634 "transport_ack_timeout": 0, 00:03:27.634 "ctrlr_loss_timeout_sec": 0, 00:03:27.634 "reconnect_delay_sec": 0, 00:03:27.634 "fast_io_fail_timeout_sec": 0, 00:03:27.634 "disable_auto_failback": false, 00:03:27.634 "generate_uuids": false, 00:03:27.634 "transport_tos": 0, 00:03:27.634 "nvme_error_stat": false, 00:03:27.634 "rdma_srq_size": 0, 00:03:27.634 "io_path_stat": false, 00:03:27.634 "allow_accel_sequence": false, 00:03:27.634 "rdma_max_cq_size": 0, 00:03:27.634 "rdma_cm_event_timeout_ms": 0, 00:03:27.634 "dhchap_digests": [ 00:03:27.634 "sha256", 00:03:27.634 "sha384", 00:03:27.634 "sha512" 00:03:27.634 ], 00:03:27.634 "dhchap_dhgroups": [ 00:03:27.634 "null", 00:03:27.634 "ffdhe2048", 00:03:27.634 "ffdhe3072", 00:03:27.634 "ffdhe4096", 00:03:27.634 "ffdhe6144", 00:03:27.634 "ffdhe8192" 00:03:27.634 ] 00:03:27.634 } 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "method": "bdev_nvme_set_hotplug", 00:03:27.634 "params": { 00:03:27.634 "period_us": 100000, 00:03:27.634 "enable": false 00:03:27.634 } 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "method": "bdev_wait_for_examine" 00:03:27.634 } 00:03:27.634 ] 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "subsystem": "scsi", 00:03:27.634 "config": null 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "subsystem": "scheduler", 00:03:27.634 "config": [ 00:03:27.634 { 00:03:27.634 "method": "framework_set_scheduler", 00:03:27.634 "params": { 00:03:27.634 "name": "static" 00:03:27.634 } 00:03:27.634 } 00:03:27.634 ] 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "subsystem": "vhost_scsi", 00:03:27.634 "config": [] 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "subsystem": "vhost_blk", 00:03:27.634 "config": [] 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "subsystem": "ublk", 00:03:27.634 "config": [] 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "subsystem": "nbd", 00:03:27.634 "config": [] 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "subsystem": "nvmf", 00:03:27.634 "config": [ 00:03:27.634 { 00:03:27.634 "method": "nvmf_set_config", 00:03:27.634 "params": { 00:03:27.634 "discovery_filter": "match_any", 00:03:27.634 "admin_cmd_passthru": { 00:03:27.634 "identify_ctrlr": false 00:03:27.634 } 00:03:27.634 } 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "method": "nvmf_set_max_subsystems", 00:03:27.634 "params": { 00:03:27.634 "max_subsystems": 1024 00:03:27.634 } 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "method": "nvmf_set_crdt", 00:03:27.634 "params": { 00:03:27.634 "crdt1": 0, 00:03:27.634 "crdt2": 0, 00:03:27.634 "crdt3": 0 00:03:27.634 } 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "method": "nvmf_create_transport", 00:03:27.634 "params": { 00:03:27.634 "trtype": "TCP", 00:03:27.634 "max_queue_depth": 128, 00:03:27.634 "max_io_qpairs_per_ctrlr": 127, 00:03:27.634 "in_capsule_data_size": 4096, 00:03:27.634 "max_io_size": 131072, 00:03:27.634 "io_unit_size": 131072, 00:03:27.634 "max_aq_depth": 128, 00:03:27.634 "num_shared_buffers": 511, 00:03:27.634 "buf_cache_size": 4294967295, 00:03:27.634 "dif_insert_or_strip": false, 00:03:27.634 "zcopy": false, 00:03:27.634 "c2h_success": true, 00:03:27.634 "sock_priority": 0, 00:03:27.634 "abort_timeout_sec": 1, 00:03:27.634 "ack_timeout": 0 00:03:27.634 } 00:03:27.634 } 00:03:27.634 ] 00:03:27.634 }, 00:03:27.634 { 00:03:27.634 "subsystem": "iscsi", 00:03:27.634 "config": [ 00:03:27.634 { 00:03:27.634 "method": "iscsi_set_options", 00:03:27.634 "params": { 00:03:27.634 "node_base": "iqn.2016-06.io.spdk", 00:03:27.634 
"max_sessions": 128, 00:03:27.634 "max_connections_per_session": 2, 00:03:27.634 "max_queue_depth": 64, 00:03:27.634 "default_time2wait": 2, 00:03:27.634 "default_time2retain": 20, 00:03:27.634 "first_burst_length": 8192, 00:03:27.634 "immediate_data": true, 00:03:27.634 "allow_duplicated_isid": false, 00:03:27.634 "error_recovery_level": 0, 00:03:27.634 "nop_timeout": 60, 00:03:27.634 "nop_in_interval": 30, 00:03:27.634 "disable_chap": false, 00:03:27.634 "require_chap": false, 00:03:27.634 "mutual_chap": false, 00:03:27.634 "chap_group": 0, 00:03:27.634 "max_large_datain_per_connection": 64, 00:03:27.634 "max_r2t_per_connection": 4, 00:03:27.634 "pdu_pool_size": 36864, 00:03:27.634 "immediate_data_pool_size": 16384, 00:03:27.634 "data_out_pool_size": 2048 00:03:27.634 } 00:03:27.634 } 00:03:27.634 ] 00:03:27.634 } 00:03:27.634 ] 00:03:27.634 } 00:03:27.634 03:17:05 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:27.634 03:17:05 -- rpc/skip_rpc.sh@40 -- # killprocess 130362 00:03:27.634 03:17:05 -- common/autotest_common.sh@936 -- # '[' -z 130362 ']' 00:03:27.634 03:17:05 -- common/autotest_common.sh@940 -- # kill -0 130362 00:03:27.634 03:17:05 -- common/autotest_common.sh@941 -- # uname 00:03:27.634 03:17:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:27.634 03:17:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130362 00:03:27.634 03:17:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:27.634 03:17:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:27.634 03:17:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130362' 00:03:27.634 killing process with pid 130362 00:03:27.634 03:17:05 -- common/autotest_common.sh@955 -- # kill 130362 00:03:27.634 03:17:05 -- common/autotest_common.sh@960 -- # wait 130362 00:03:28.201 03:17:05 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=130476 00:03:28.201 03:17:05 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:28.201 03:17:05 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:33.461 03:17:10 -- rpc/skip_rpc.sh@50 -- # killprocess 130476 00:03:33.461 03:17:10 -- common/autotest_common.sh@936 -- # '[' -z 130476 ']' 00:03:33.461 03:17:10 -- common/autotest_common.sh@940 -- # kill -0 130476 00:03:33.461 03:17:10 -- common/autotest_common.sh@941 -- # uname 00:03:33.461 03:17:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:33.461 03:17:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130476 00:03:33.461 03:17:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:33.461 03:17:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:33.461 03:17:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130476' 00:03:33.461 killing process with pid 130476 00:03:33.461 03:17:10 -- common/autotest_common.sh@955 -- # kill 130476 00:03:33.461 03:17:10 -- common/autotest_common.sh@960 -- # wait 130476 00:03:33.719 03:17:11 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:33.719 03:17:11 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:33.719 00:03:33.719 real 0m6.593s 00:03:33.719 user 0m6.170s 00:03:33.719 sys 0m0.689s 00:03:33.719 03:17:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 
00:03:33.719 03:17:11 -- common/autotest_common.sh@10 -- # set +x 00:03:33.719 ************************************ 00:03:33.719 END TEST skip_rpc_with_json 00:03:33.719 ************************************ 00:03:33.719 03:17:11 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:33.719 03:17:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:33.719 03:17:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:33.719 03:17:11 -- common/autotest_common.sh@10 -- # set +x 00:03:33.719 ************************************ 00:03:33.719 START TEST skip_rpc_with_delay 00:03:33.719 ************************************ 00:03:33.719 03:17:11 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:03:33.719 03:17:11 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:33.719 03:17:11 -- common/autotest_common.sh@638 -- # local es=0 00:03:33.719 03:17:11 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:33.719 03:17:11 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:33.719 03:17:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:33.719 03:17:11 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:33.719 03:17:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:33.719 03:17:11 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:33.719 03:17:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:33.719 03:17:11 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:33.719 03:17:11 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:33.719 03:17:11 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:33.719 [2024-04-19 03:17:11.230808] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:33.719 [2024-04-19 03:17:11.230905] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:33.719 03:17:11 -- common/autotest_common.sh@641 -- # es=1 00:03:33.719 03:17:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:33.719 03:17:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:03:33.719 03:17:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:33.719 00:03:33.719 real 0m0.065s 00:03:33.719 user 0m0.038s 00:03:33.719 sys 0m0.026s 00:03:33.719 03:17:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:33.719 03:17:11 -- common/autotest_common.sh@10 -- # set +x 00:03:33.719 ************************************ 00:03:33.719 END TEST skip_rpc_with_delay 00:03:33.719 ************************************ 00:03:33.719 03:17:11 -- rpc/skip_rpc.sh@77 -- # uname 00:03:33.719 03:17:11 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:33.720 03:17:11 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:33.720 03:17:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:33.720 03:17:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:33.720 03:17:11 -- common/autotest_common.sh@10 -- # set +x 00:03:33.978 ************************************ 00:03:33.978 START TEST exit_on_failed_rpc_init 00:03:33.978 ************************************ 00:03:33.978 03:17:11 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:03:33.978 03:17:11 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=131310 00:03:33.978 03:17:11 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:33.978 03:17:11 -- rpc/skip_rpc.sh@63 -- # waitforlisten 131310 00:03:33.978 03:17:11 -- common/autotest_common.sh@817 -- # '[' -z 131310 ']' 00:03:33.978 03:17:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:33.978 03:17:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:33.978 03:17:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:33.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:33.978 03:17:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:33.978 03:17:11 -- common/autotest_common.sh@10 -- # set +x 00:03:33.978 [2024-04-19 03:17:11.412239] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:03:33.978 [2024-04-19 03:17:11.412323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131310 ] 00:03:33.978 EAL: No free 2048 kB hugepages reported on node 1 00:03:33.978 [2024-04-19 03:17:11.472275] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:34.237 [2024-04-19 03:17:11.594222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.495 03:17:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:34.495 03:17:11 -- common/autotest_common.sh@850 -- # return 0 00:03:34.495 03:17:11 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:34.495 03:17:11 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:34.495 03:17:11 -- common/autotest_common.sh@638 -- # local es=0 00:03:34.495 03:17:11 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:34.495 03:17:11 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:34.495 03:17:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:34.495 03:17:11 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:34.495 03:17:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:34.495 03:17:11 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:34.495 03:17:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:34.495 03:17:11 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:34.495 03:17:11 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:34.495 03:17:11 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:34.495 [2024-04-19 03:17:11.912298] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:03:34.495 [2024-04-19 03:17:11.912395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131315 ] 00:03:34.495 EAL: No free 2048 kB hugepages reported on node 1 00:03:34.495 [2024-04-19 03:17:11.972198] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:34.753 [2024-04-19 03:17:12.090581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:34.753 [2024-04-19 03:17:12.090694] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
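The 'socket in use' error above is the mechanism exit_on_failed_rpc_init relies on: the first target (pid 131310) holds the default /var/tmp/spdk.sock, so a second target started without -r must fail RPC init and stop, which the trace below confirms with "spdk_app_stop'd on non-zero". Condensed, the collision looks roughly like this (core masks as traced; exact exit status may vary):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 &    # first instance claims /var/tmp/spdk.sock
    # second instance hits the in-use socket and exits non-zero
    $SPDK/build/bin/spdk_tgt -m 0x2 || echo "failed as expected: $?"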
00:03:34.753 [2024-04-19 03:17:12.090717] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:34.753 [2024-04-19 03:17:12.090731] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:34.753 03:17:12 -- common/autotest_common.sh@641 -- # es=234 00:03:34.753 03:17:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:34.753 03:17:12 -- common/autotest_common.sh@650 -- # es=106 00:03:34.753 03:17:12 -- common/autotest_common.sh@651 -- # case "$es" in 00:03:34.753 03:17:12 -- common/autotest_common.sh@658 -- # es=1 00:03:34.753 03:17:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:34.753 03:17:12 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:34.754 03:17:12 -- rpc/skip_rpc.sh@70 -- # killprocess 131310 00:03:34.754 03:17:12 -- common/autotest_common.sh@936 -- # '[' -z 131310 ']' 00:03:34.754 03:17:12 -- common/autotest_common.sh@940 -- # kill -0 131310 00:03:34.754 03:17:12 -- common/autotest_common.sh@941 -- # uname 00:03:34.754 03:17:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:34.754 03:17:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131310 00:03:34.754 03:17:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:34.754 03:17:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:34.754 03:17:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131310' 00:03:34.754 killing process with pid 131310 00:03:34.754 03:17:12 -- common/autotest_common.sh@955 -- # kill 131310 00:03:34.754 03:17:12 -- common/autotest_common.sh@960 -- # wait 131310 00:03:35.321 00:03:35.321 real 0m1.348s 00:03:35.321 user 0m1.520s 00:03:35.321 sys 0m0.469s 00:03:35.321 03:17:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:35.321 03:17:12 -- common/autotest_common.sh@10 -- # set +x 00:03:35.321 ************************************ 00:03:35.321 END TEST exit_on_failed_rpc_init 00:03:35.321 ************************************ 00:03:35.321 03:17:12 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:35.321 00:03:35.321 real 0m14.010s 00:03:35.321 user 0m13.078s 00:03:35.321 sys 0m1.807s 00:03:35.321 03:17:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:35.321 03:17:12 -- common/autotest_common.sh@10 -- # set +x 00:03:35.321 ************************************ 00:03:35.321 END TEST skip_rpc 00:03:35.321 ************************************ 00:03:35.321 03:17:12 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:35.321 03:17:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.321 03:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.321 03:17:12 -- common/autotest_common.sh@10 -- # set +x 00:03:35.321 ************************************ 00:03:35.321 START TEST rpc_client 00:03:35.321 ************************************ 00:03:35.321 03:17:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:35.580 * Looking for test storage... 
00:03:35.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:35.580 03:17:12 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:35.580 OK 00:03:35.580 03:17:12 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:35.580 00:03:35.580 real 0m0.074s 00:03:35.580 user 0m0.024s 00:03:35.580 sys 0m0.055s 00:03:35.580 03:17:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:35.580 03:17:12 -- common/autotest_common.sh@10 -- # set +x 00:03:35.580 ************************************ 00:03:35.580 END TEST rpc_client 00:03:35.580 ************************************ 00:03:35.580 03:17:12 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:35.580 03:17:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.580 03:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.580 03:17:12 -- common/autotest_common.sh@10 -- # set +x 00:03:35.580 ************************************ 00:03:35.580 START TEST json_config 00:03:35.580 ************************************ 00:03:35.580 03:17:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:35.580 03:17:13 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:35.580 03:17:13 -- nvmf/common.sh@7 -- # uname -s 00:03:35.580 03:17:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:35.580 03:17:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:35.580 03:17:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:35.580 03:17:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:35.580 03:17:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:35.580 03:17:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:35.580 03:17:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:35.580 03:17:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:35.580 03:17:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:35.580 03:17:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:35.580 03:17:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:35.580 03:17:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:35.580 03:17:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:35.580 03:17:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:35.580 03:17:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:35.580 03:17:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:35.580 03:17:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:35.580 03:17:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:35.580 03:17:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:35.580 03:17:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:35.580 03:17:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.580 03:17:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.580 03:17:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.580 03:17:13 -- paths/export.sh@5 -- # export PATH 00:03:35.580 03:17:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.580 03:17:13 -- nvmf/common.sh@47 -- # : 0 00:03:35.580 03:17:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:35.580 03:17:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:35.580 03:17:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:35.580 03:17:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:35.580 03:17:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:35.580 03:17:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:35.580 03:17:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:35.580 03:17:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:35.580 03:17:13 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:35.580 03:17:13 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:35.580 03:17:13 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:35.580 03:17:13 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:35.580 03:17:13 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:35.580 03:17:13 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:35.580 03:17:13 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:35.580 03:17:13 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:35.580 03:17:13 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:35.580 03:17:13 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:35.580 03:17:13 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:03:35.580 03:17:13 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:35.580 03:17:13 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:35.580 03:17:13 -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:35.580 03:17:13 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:35.580 03:17:13 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:35.580 INFO: JSON configuration test init 00:03:35.580 03:17:13 -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:35.580 03:17:13 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:35.580 03:17:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:35.580 03:17:13 -- common/autotest_common.sh@10 -- # set +x 00:03:35.580 03:17:13 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:35.580 03:17:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:35.580 03:17:13 -- common/autotest_common.sh@10 -- # set +x 00:03:35.580 03:17:13 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:35.580 03:17:13 -- json_config/common.sh@9 -- # local app=target 00:03:35.580 03:17:13 -- json_config/common.sh@10 -- # shift 00:03:35.580 03:17:13 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:35.580 03:17:13 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:35.580 03:17:13 -- json_config/common.sh@15 -- # local app_extra_params= 00:03:35.580 03:17:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:35.580 03:17:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:35.580 03:17:13 -- json_config/common.sh@22 -- # app_pid["$app"]=131577 00:03:35.580 03:17:13 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:35.580 03:17:13 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:35.580 Waiting for target to run... 00:03:35.580 03:17:13 -- json_config/common.sh@25 -- # waitforlisten 131577 /var/tmp/spdk_tgt.sock 00:03:35.580 03:17:13 -- common/autotest_common.sh@817 -- # '[' -z 131577 ']' 00:03:35.580 03:17:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:35.580 03:17:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:35.580 03:17:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:35.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:35.580 03:17:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:35.580 03:17:13 -- common/autotest_common.sh@10 -- # set +x 00:03:35.839 [2024-04-19 03:17:13.168885] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
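Once spdk_tgt is up with --wait-for-rpc on the private socket passed via -r, everything else in json_config goes through scripts/rpc.py against that socket. The first such step, traced below, pipes a generated NVMe config into load_config; as a sketch (assuming load_config reads the JSON on stdin, which is how the pipeline below uses it):

    SOCK=/var/tmp/spdk_tgt.sock
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # feed a generated config with subsystems into the RPC-waiting target
    $SPDK/scripts/gen_nvme.sh --json-with-subsystems | \
        $SPDK/scripts/rpc.py -s "$SOCK" load_config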
00:03:35.839 [2024-04-19 03:17:13.168975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131577 ] 00:03:35.839 EAL: No free 2048 kB hugepages reported on node 1 00:03:36.405 [2024-04-19 03:17:13.667193] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.405 [2024-04-19 03:17:13.774624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.662 03:17:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:36.662 03:17:14 -- common/autotest_common.sh@850 -- # return 0 00:03:36.662 03:17:14 -- json_config/common.sh@26 -- # echo '' 00:03:36.662 00:03:36.662 03:17:14 -- json_config/json_config.sh@269 -- # create_accel_config 00:03:36.662 03:17:14 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:36.663 03:17:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:36.663 03:17:14 -- common/autotest_common.sh@10 -- # set +x 00:03:36.663 03:17:14 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:36.663 03:17:14 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:36.663 03:17:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:36.663 03:17:14 -- common/autotest_common.sh@10 -- # set +x 00:03:36.663 03:17:14 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:36.663 03:17:14 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:36.663 03:17:14 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:39.946 03:17:17 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:39.946 03:17:17 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:39.946 03:17:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:39.946 03:17:17 -- common/autotest_common.sh@10 -- # set +x 00:03:39.946 03:17:17 -- json_config/json_config.sh@45 -- # local ret=0 00:03:39.946 03:17:17 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:39.946 03:17:17 -- json_config/json_config.sh@46 -- # local enabled_types 00:03:39.946 03:17:17 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:39.946 03:17:17 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:39.946 03:17:17 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:40.204 03:17:17 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:40.204 03:17:17 -- json_config/json_config.sh@48 -- # local get_types 00:03:40.204 03:17:17 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:40.204 03:17:17 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:40.204 03:17:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:40.204 03:17:17 -- common/autotest_common.sh@10 -- # set +x 00:03:40.204 03:17:17 -- json_config/json_config.sh@55 -- # return 0 00:03:40.204 03:17:17 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:40.204 03:17:17 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:40.204 03:17:17 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:40.204 03:17:17 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:40.204 03:17:17 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:40.204 03:17:17 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:40.204 03:17:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:40.204 03:17:17 -- common/autotest_common.sh@10 -- # set +x 00:03:40.204 03:17:17 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:40.204 03:17:17 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:40.204 03:17:17 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:40.204 03:17:17 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:40.204 03:17:17 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:40.462 MallocForNvmf0 00:03:40.462 03:17:17 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:40.462 03:17:17 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:40.462 MallocForNvmf1 00:03:40.721 03:17:18 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:40.721 03:17:18 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:40.721 [2024-04-19 03:17:18.253940] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:40.721 03:17:18 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:40.721 03:17:18 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:40.979 03:17:18 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:40.979 03:17:18 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:41.264 03:17:18 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:41.264 03:17:18 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:41.522 03:17:18 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:41.522 03:17:18 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:41.780 [2024-04-19 03:17:19.209089] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:41.780 03:17:19 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:41.780 03:17:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:41.780 
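The create_nvmf_subsystem_config sequence above maps one-to-one onto plain rpc.py calls; collected in order (names, sizes, and flags exactly as traced):

    SOCK=/var/tmp/spdk_tgt.sock
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s $SOCK"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The two *NOTICE* lines in the trace ('TCP Transport Init' and 'Listening on 127.0.0.1 port 4420') are the target acknowledging the transport and listener steps.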
03:17:19 -- common/autotest_common.sh@10 -- # set +x 00:03:41.780 03:17:19 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:41.780 03:17:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:41.780 03:17:19 -- common/autotest_common.sh@10 -- # set +x 00:03:41.780 03:17:19 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:41.780 03:17:19 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:41.780 03:17:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:42.038 MallocBdevForConfigChangeCheck 00:03:42.038 03:17:19 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:42.038 03:17:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:42.038 03:17:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.038 03:17:19 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:42.038 03:17:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:42.604 03:17:19 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:03:42.604 INFO: shutting down applications... 00:03:42.604 03:17:19 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:42.604 03:17:19 -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:42.604 03:17:19 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:42.604 03:17:19 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:43.976 Calling clear_iscsi_subsystem 00:03:43.976 Calling clear_nvmf_subsystem 00:03:43.976 Calling clear_nbd_subsystem 00:03:43.976 Calling clear_ublk_subsystem 00:03:43.976 Calling clear_vhost_blk_subsystem 00:03:43.976 Calling clear_vhost_scsi_subsystem 00:03:43.976 Calling clear_bdev_subsystem 00:03:43.976 03:17:21 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:43.976 03:17:21 -- json_config/json_config.sh@343 -- # count=100 00:03:43.976 03:17:21 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:43.976 03:17:21 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:43.976 03:17:21 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:43.976 03:17:21 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:44.541 03:17:21 -- json_config/json_config.sh@345 -- # break 00:03:44.541 03:17:21 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:44.541 03:17:21 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:44.541 03:17:21 -- json_config/common.sh@31 -- # local app=target 00:03:44.541 03:17:21 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:44.541 03:17:21 -- json_config/common.sh@35 -- # [[ -n 131577 ]] 00:03:44.541 03:17:21 -- json_config/common.sh@38 -- # kill -SIGINT 131577 00:03:44.541 03:17:21 -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:44.541 03:17:21 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:03:44.541 03:17:21 -- json_config/common.sh@41 -- # kill -0 131577 00:03:44.541 03:17:21 -- json_config/common.sh@45 -- # sleep 0.5 00:03:45.107 03:17:22 -- json_config/common.sh@40 -- # (( i++ )) 00:03:45.107 03:17:22 -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:45.108 03:17:22 -- json_config/common.sh@41 -- # kill -0 131577 00:03:45.108 03:17:22 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:45.108 03:17:22 -- json_config/common.sh@43 -- # break 00:03:45.108 03:17:22 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:45.108 03:17:22 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:45.108 SPDK target shutdown done 00:03:45.108 03:17:22 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:03:45.108 INFO: relaunching applications... 00:03:45.108 03:17:22 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:45.108 03:17:22 -- json_config/common.sh@9 -- # local app=target 00:03:45.108 03:17:22 -- json_config/common.sh@10 -- # shift 00:03:45.108 03:17:22 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:45.108 03:17:22 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:45.108 03:17:22 -- json_config/common.sh@15 -- # local app_extra_params= 00:03:45.108 03:17:22 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:45.108 03:17:22 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:45.108 03:17:22 -- json_config/common.sh@22 -- # app_pid["$app"]=132770 00:03:45.108 03:17:22 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:45.108 03:17:22 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:45.108 Waiting for target to run... 00:03:45.108 03:17:22 -- json_config/common.sh@25 -- # waitforlisten 132770 /var/tmp/spdk_tgt.sock 00:03:45.108 03:17:22 -- common/autotest_common.sh@817 -- # '[' -z 132770 ']' 00:03:45.108 03:17:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:45.108 03:17:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:45.108 03:17:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:45.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:45.108 03:17:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:45.108 03:17:22 -- common/autotest_common.sh@10 -- # set +x 00:03:45.108 [2024-04-19 03:17:22.444210] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
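The kill loop threaded through the trace is the graceful-shutdown idiom used for every app here: send SIGINT, then poll with kill -0 every 0.5 s for up to 30 iterations. As a standalone sketch (pid 131577 is the instance being stopped above):

    pid=131577
    kill -SIGINT "$pid"
    for i in $(seq 1 30); do
        kill -0 "$pid" 2>/dev/null || break   # process gone: shutdown done
        sleep 0.5
    done

Once 'SPDK target shutdown done' prints, the harness relaunches the target from the config it saved earlier (--json spdk_tgt_config.json), which sets up the comparison below.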
00:03:45.108 [2024-04-19 03:17:22.444310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132770 ] 00:03:45.108 EAL: No free 2048 kB hugepages reported on node 1 00:03:45.673 [2024-04-19 03:17:22.963695] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.674 [2024-04-19 03:17:23.070860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.954 [2024-04-19 03:17:26.099498] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:48.954 [2024-04-19 03:17:26.131973] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:49.520 03:17:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:49.520 03:17:26 -- common/autotest_common.sh@850 -- # return 0 00:03:49.520 03:17:26 -- json_config/common.sh@26 -- # echo '' 00:03:49.520 00:03:49.520 03:17:26 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:03:49.520 03:17:26 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:49.520 INFO: Checking if target configuration is the same... 00:03:49.520 03:17:26 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:49.520 03:17:26 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:03:49.520 03:17:26 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:49.520 + '[' 2 -ne 2 ']' 00:03:49.520 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:49.520 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:49.520 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:49.520 +++ basename /dev/fd/62 00:03:49.520 ++ mktemp /tmp/62.XXX 00:03:49.520 + tmp_file_1=/tmp/62.V2B 00:03:49.520 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:49.520 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:49.520 + tmp_file_2=/tmp/spdk_tgt_config.json.OvT 00:03:49.520 + ret=0 00:03:49.520 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:49.778 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:49.778 + diff -u /tmp/62.V2B /tmp/spdk_tgt_config.json.OvT 00:03:49.778 + echo 'INFO: JSON config files are the same' 00:03:49.778 INFO: JSON config files are the same 00:03:49.778 + rm /tmp/62.V2B /tmp/spdk_tgt_config.json.OvT 00:03:49.778 + exit 0 00:03:49.778 03:17:27 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:03:49.778 03:17:27 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:49.778 INFO: changing configuration and checking if this can be detected... 
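The 'JSON config files are the same' verdict comes from normalizing both configs with config_filter.py -method sort and diffing the results, as traced above. A sketch of that comparison, reusing this run's mktemp names for readability (and assuming the filter reads stdin and writes stdout, as the pipeline suggests):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $SPDK/test/json_config/config_filter.py -method sort > /tmp/62.V2B
    $SPDK/test/json_config/config_filter.py -method sort \
        < $SPDK/spdk_tgt_config.json > /tmp/spdk_tgt_config.json.OvT
    # exit 0 here means the running config matches the saved one
    diff -u /tmp/62.V2B /tmp/spdk_tgt_config.json.OvT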
00:03:49.778 03:17:27 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:49.778 03:17:27 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:50.036 03:17:27 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:50.036 03:17:27 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:03:50.036 03:17:27 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:50.036 + '[' 2 -ne 2 ']' 00:03:50.036 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:50.036 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:50.036 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.036 +++ basename /dev/fd/62 00:03:50.036 ++ mktemp /tmp/62.XXX 00:03:50.036 + tmp_file_1=/tmp/62.7Nz 00:03:50.036 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:50.036 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:50.036 + tmp_file_2=/tmp/spdk_tgt_config.json.FLL 00:03:50.036 + ret=0 00:03:50.036 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:50.603 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:50.603 + diff -u /tmp/62.7Nz /tmp/spdk_tgt_config.json.FLL 00:03:50.603 + ret=1 00:03:50.603 + echo '=== Start of file: /tmp/62.7Nz ===' 00:03:50.603 + cat /tmp/62.7Nz 00:03:50.603 + echo '=== End of file: /tmp/62.7Nz ===' 00:03:50.603 + echo '' 00:03:50.603 + echo '=== Start of file: /tmp/spdk_tgt_config.json.FLL ===' 00:03:50.603 + cat /tmp/spdk_tgt_config.json.FLL 00:03:50.603 + echo '=== End of file: /tmp/spdk_tgt_config.json.FLL ===' 00:03:50.603 + echo '' 00:03:50.603 + rm /tmp/62.7Nz /tmp/spdk_tgt_config.json.FLL 00:03:50.603 + exit 1 00:03:50.603 03:17:27 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:03:50.603 INFO: configuration change detected. 
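This time ret=1 from diff is the success condition: deleting MallocBdevForConfigChangeCheck, the marker bdev created earlier for exactly this purpose, guarantees the running config no longer matches the saved file. The induced change is a single call:

    # remove the marker bdev so save_config output diverges from spdk_tgt_config.json
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck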
00:03:50.604 03:17:27 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:03:50.604 03:17:27 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:03:50.604 03:17:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:50.604 03:17:27 -- common/autotest_common.sh@10 -- # set +x 00:03:50.604 03:17:27 -- json_config/json_config.sh@307 -- # local ret=0 00:03:50.604 03:17:27 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:03:50.604 03:17:27 -- json_config/json_config.sh@317 -- # [[ -n 132770 ]] 00:03:50.604 03:17:27 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:03:50.604 03:17:27 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:03:50.604 03:17:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:50.604 03:17:27 -- common/autotest_common.sh@10 -- # set +x 00:03:50.604 03:17:27 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:03:50.604 03:17:27 -- json_config/json_config.sh@193 -- # uname -s 00:03:50.604 03:17:27 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:03:50.604 03:17:27 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:03:50.604 03:17:27 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:03:50.604 03:17:27 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:03:50.604 03:17:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:50.604 03:17:27 -- common/autotest_common.sh@10 -- # set +x 00:03:50.604 03:17:27 -- json_config/json_config.sh@323 -- # killprocess 132770 00:03:50.604 03:17:27 -- common/autotest_common.sh@936 -- # '[' -z 132770 ']' 00:03:50.604 03:17:27 -- common/autotest_common.sh@940 -- # kill -0 132770 00:03:50.604 03:17:27 -- common/autotest_common.sh@941 -- # uname 00:03:50.604 03:17:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:50.604 03:17:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132770 00:03:50.604 03:17:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:50.604 03:17:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:50.604 03:17:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132770' 00:03:50.604 killing process with pid 132770 00:03:50.604 03:17:28 -- common/autotest_common.sh@955 -- # kill 132770 00:03:50.604 03:17:28 -- common/autotest_common.sh@960 -- # wait 132770 00:03:52.502 03:17:29 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:52.502 03:17:29 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:03:52.502 03:17:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:52.502 03:17:29 -- common/autotest_common.sh@10 -- # set +x 00:03:52.502 03:17:29 -- json_config/json_config.sh@328 -- # return 0 00:03:52.502 03:17:29 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:03:52.502 INFO: Success 00:03:52.502 00:03:52.502 real 0m16.657s 00:03:52.502 user 0m18.322s 00:03:52.502 sys 0m2.217s 00:03:52.502 03:17:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:52.502 03:17:29 -- common/autotest_common.sh@10 -- # set +x 00:03:52.502 ************************************ 00:03:52.502 END TEST json_config 00:03:52.502 ************************************ 00:03:52.502 03:17:29 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:52.502 03:17:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.502 03:17:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.502 03:17:29 -- common/autotest_common.sh@10 -- # set +x 00:03:52.502 ************************************ 00:03:52.502 START TEST json_config_extra_key 00:03:52.502 ************************************ 00:03:52.502 03:17:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:52.502 03:17:29 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:52.502 03:17:29 -- nvmf/common.sh@7 -- # uname -s 00:03:52.502 03:17:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:52.502 03:17:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:52.502 03:17:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:52.502 03:17:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:52.502 03:17:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:52.502 03:17:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:52.502 03:17:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:52.502 03:17:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:52.502 03:17:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:52.502 03:17:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:52.502 03:17:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:52.502 03:17:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:52.502 03:17:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:52.502 03:17:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:52.502 03:17:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:52.502 03:17:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:52.502 03:17:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:52.502 03:17:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:52.502 03:17:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:52.502 03:17:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:52.502 03:17:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.502 03:17:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.502 03:17:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.502 03:17:29 -- paths/export.sh@5 -- # export PATH 00:03:52.503 03:17:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.503 03:17:29 -- nvmf/common.sh@47 -- # : 0 00:03:52.503 03:17:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:52.503 03:17:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:52.503 03:17:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:52.503 03:17:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:52.503 03:17:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:52.503 03:17:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:52.503 03:17:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:52.503 03:17:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:52.503 03:17:29 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:52.503 03:17:29 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:52.503 03:17:29 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:52.503 03:17:29 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:52.503 03:17:29 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:52.503 03:17:29 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:52.503 03:17:29 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:52.503 03:17:29 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:52.503 03:17:29 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:52.503 03:17:29 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:52.503 03:17:29 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:52.503 INFO: launching applications... 
00:03:52.503 03:17:29 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:52.503 03:17:29 -- json_config/common.sh@9 -- # local app=target 00:03:52.503 03:17:29 -- json_config/common.sh@10 -- # shift 00:03:52.503 03:17:29 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:52.503 03:17:29 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:52.503 03:17:29 -- json_config/common.sh@15 -- # local app_extra_params= 00:03:52.503 03:17:29 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:52.503 03:17:29 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:52.503 03:17:29 -- json_config/common.sh@22 -- # app_pid["$app"]=133821 00:03:52.503 03:17:29 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:52.503 03:17:29 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:52.503 Waiting for target to run... 00:03:52.503 03:17:29 -- json_config/common.sh@25 -- # waitforlisten 133821 /var/tmp/spdk_tgt.sock 00:03:52.503 03:17:29 -- common/autotest_common.sh@817 -- # '[' -z 133821 ']' 00:03:52.503 03:17:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:52.503 03:17:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:52.503 03:17:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:52.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:52.503 03:17:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:52.503 03:17:29 -- common/autotest_common.sh@10 -- # set +x 00:03:52.503 [2024-04-19 03:17:29.945357] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:03:52.503 [2024-04-19 03:17:29.945471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133821 ] 00:03:52.503 EAL: No free 2048 kB hugepages reported on node 1 00:03:53.068 [2024-04-19 03:17:30.460305] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.068 [2024-04-19 03:17:30.568365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.635 03:17:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:53.635 03:17:30 -- common/autotest_common.sh@850 -- # return 0 00:03:53.635 03:17:30 -- json_config/common.sh@26 -- # echo '' 00:03:53.635 00:03:53.635 03:17:30 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:53.635 INFO: shutting down applications... 
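json_config_extra_key is the minimal round trip: boot spdk_tgt straight from a canned JSON (extra_key.json), confirm it reaches the reactor, and shut it down; no RPC mutation happens in between. Condensed (paths as traced; the wait and kill steps follow the same idioms shown earlier):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json $SPDK/test/json_config/extra_key.json &
    pid=$!
    # ...wait for startup, then SIGINT and poll until the pid is gone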
00:03:53.635 03:17:30 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:53.635 03:17:30 -- json_config/common.sh@31 -- # local app=target 00:03:53.635 03:17:30 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:53.635 03:17:30 -- json_config/common.sh@35 -- # [[ -n 133821 ]] 00:03:53.635 03:17:30 -- json_config/common.sh@38 -- # kill -SIGINT 133821 00:03:53.635 03:17:30 -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:53.635 03:17:30 -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:53.635 03:17:30 -- json_config/common.sh@41 -- # kill -0 133821 00:03:53.635 03:17:30 -- json_config/common.sh@45 -- # sleep 0.5 00:03:53.893 03:17:31 -- json_config/common.sh@40 -- # (( i++ )) 00:03:53.893 03:17:31 -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:53.893 03:17:31 -- json_config/common.sh@41 -- # kill -0 133821 00:03:53.893 03:17:31 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:53.893 03:17:31 -- json_config/common.sh@43 -- # break 00:03:53.893 03:17:31 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:53.893 03:17:31 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:53.893 SPDK target shutdown done 00:03:53.893 03:17:31 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:53.893 Success 00:03:53.893 00:03:53.893 real 0m1.555s 00:03:53.893 user 0m1.193s 00:03:53.893 sys 0m0.607s 00:03:53.893 03:17:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:53.893 03:17:31 -- common/autotest_common.sh@10 -- # set +x 00:03:53.893 ************************************ 00:03:53.893 END TEST json_config_extra_key 00:03:53.893 ************************************ 00:03:53.893 03:17:31 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:53.893 03:17:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:53.893 03:17:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:53.893 03:17:31 -- common/autotest_common.sh@10 -- # set +x 00:03:54.151 ************************************ 00:03:54.151 START TEST alias_rpc 00:03:54.151 ************************************ 00:03:54.151 03:17:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:54.151 * Looking for test storage... 00:03:54.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:54.151 03:17:31 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:54.151 03:17:31 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=134060 00:03:54.151 03:17:31 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.151 03:17:31 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 134060 00:03:54.151 03:17:31 -- common/autotest_common.sh@817 -- # '[' -z 134060 ']' 00:03:54.151 03:17:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.151 03:17:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:54.151 03:17:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
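The alias_rpc test starting here has one functional step, visible in the trace below: rpc.py load_config -i against the default socket. The -i switch appears to be the include-aliases option, letting a config written with deprecated RPC method names still load; that reading of the flag, and the input config, are assumptions since neither is spelled out in the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # hypothetical input: a config that still uses old-style method names
    $SPDK/scripts/rpc.py load_config -i < old_name_config.json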
00:03:54.151 03:17:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:54.151 03:17:31 -- common/autotest_common.sh@10 -- # set +x 00:03:54.151 [2024-04-19 03:17:31.626452] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:03:54.151 [2024-04-19 03:17:31.626542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134060 ] 00:03:54.151 EAL: No free 2048 kB hugepages reported on node 1 00:03:54.151 [2024-04-19 03:17:31.684902] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.409 [2024-04-19 03:17:31.791445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.667 03:17:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:54.667 03:17:32 -- common/autotest_common.sh@850 -- # return 0 00:03:54.667 03:17:32 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:54.925 03:17:32 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 134060 00:03:54.925 03:17:32 -- common/autotest_common.sh@936 -- # '[' -z 134060 ']' 00:03:54.925 03:17:32 -- common/autotest_common.sh@940 -- # kill -0 134060 00:03:54.925 03:17:32 -- common/autotest_common.sh@941 -- # uname 00:03:54.925 03:17:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:54.925 03:17:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134060 00:03:54.925 03:17:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:54.925 03:17:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:54.925 03:17:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134060' 00:03:54.925 killing process with pid 134060 00:03:54.925 03:17:32 -- common/autotest_common.sh@955 -- # kill 134060 00:03:54.925 03:17:32 -- common/autotest_common.sh@960 -- # wait 134060 00:03:55.492 00:03:55.492 real 0m1.293s 00:03:55.492 user 0m1.365s 00:03:55.492 sys 0m0.417s 00:03:55.492 03:17:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:55.492 03:17:32 -- common/autotest_common.sh@10 -- # set +x 00:03:55.492 ************************************ 00:03:55.492 END TEST alias_rpc 00:03:55.492 ************************************ 00:03:55.492 03:17:32 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:03:55.492 03:17:32 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:55.492 03:17:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.492 03:17:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.492 03:17:32 -- common/autotest_common.sh@10 -- # set +x 00:03:55.492 ************************************ 00:03:55.492 START TEST spdkcli_tcp 00:03:55.492 ************************************ 00:03:55.492 03:17:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:55.492 * Looking for test storage... 
00:03:55.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:55.492 03:17:33 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:55.492 03:17:33 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:55.492 03:17:33 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:55.492 03:17:33 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:55.492 03:17:33 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:55.492 03:17:33 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:55.492 03:17:33 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:55.492 03:17:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:55.492 03:17:33 -- common/autotest_common.sh@10 -- # set +x 00:03:55.492 03:17:33 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=134333 00:03:55.492 03:17:33 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:55.492 03:17:33 -- spdkcli/tcp.sh@27 -- # waitforlisten 134333 00:03:55.492 03:17:33 -- common/autotest_common.sh@817 -- # '[' -z 134333 ']' 00:03:55.492 03:17:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.492 03:17:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:55.492 03:17:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.492 03:17:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:55.492 03:17:33 -- common/autotest_common.sh@10 -- # set +x 00:03:55.774 [2024-04-19 03:17:33.053169] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
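Both suites block on waitforlisten before issuing RPCs; the locals echoed above (rpc_addr=/var/tmp/spdk.sock, max_retries=100) imply a poll that alternates between checking the pid and probing the RPC socket. A minimal stand-in under those assumptions (probing with spdk_get_version is a guess, though it is one of the methods listed further below; the real helper does more):

    # Minimal waitforlisten stand-in; the probe RPC is an assumption.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
            if scripts/rpc.py -s "$rpc_addr" -t 1 spdk_get_version &>/dev/null; then
                return 0                              # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }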
00:03:55.774 [2024-04-19 03:17:33.053254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134333 ] 00:03:55.774 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.774 [2024-04-19 03:17:33.112823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:55.774 [2024-04-19 03:17:33.228289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:55.774 [2024-04-19 03:17:33.228293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.035 03:17:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:56.035 03:17:33 -- common/autotest_common.sh@850 -- # return 0 00:03:56.035 03:17:33 -- spdkcli/tcp.sh@31 -- # socat_pid=134344 00:03:56.035 03:17:33 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:56.035 03:17:33 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:56.293 [ 00:03:56.293 "bdev_malloc_delete", 00:03:56.293 "bdev_malloc_create", 00:03:56.293 "bdev_null_resize", 00:03:56.293 "bdev_null_delete", 00:03:56.293 "bdev_null_create", 00:03:56.293 "bdev_nvme_cuse_unregister", 00:03:56.293 "bdev_nvme_cuse_register", 00:03:56.293 "bdev_opal_new_user", 00:03:56.293 "bdev_opal_set_lock_state", 00:03:56.293 "bdev_opal_delete", 00:03:56.293 "bdev_opal_get_info", 00:03:56.293 "bdev_opal_create", 00:03:56.293 "bdev_nvme_opal_revert", 00:03:56.293 "bdev_nvme_opal_init", 00:03:56.293 "bdev_nvme_send_cmd", 00:03:56.293 "bdev_nvme_get_path_iostat", 00:03:56.293 "bdev_nvme_get_mdns_discovery_info", 00:03:56.293 "bdev_nvme_stop_mdns_discovery", 00:03:56.293 "bdev_nvme_start_mdns_discovery", 00:03:56.293 "bdev_nvme_set_multipath_policy", 00:03:56.293 "bdev_nvme_set_preferred_path", 00:03:56.293 "bdev_nvme_get_io_paths", 00:03:56.293 "bdev_nvme_remove_error_injection", 00:03:56.293 "bdev_nvme_add_error_injection", 00:03:56.293 "bdev_nvme_get_discovery_info", 00:03:56.293 "bdev_nvme_stop_discovery", 00:03:56.293 "bdev_nvme_start_discovery", 00:03:56.293 "bdev_nvme_get_controller_health_info", 00:03:56.293 "bdev_nvme_disable_controller", 00:03:56.293 "bdev_nvme_enable_controller", 00:03:56.293 "bdev_nvme_reset_controller", 00:03:56.293 "bdev_nvme_get_transport_statistics", 00:03:56.293 "bdev_nvme_apply_firmware", 00:03:56.293 "bdev_nvme_detach_controller", 00:03:56.293 "bdev_nvme_get_controllers", 00:03:56.293 "bdev_nvme_attach_controller", 00:03:56.293 "bdev_nvme_set_hotplug", 00:03:56.293 "bdev_nvme_set_options", 00:03:56.293 "bdev_passthru_delete", 00:03:56.293 "bdev_passthru_create", 00:03:56.293 "bdev_lvol_grow_lvstore", 00:03:56.293 "bdev_lvol_get_lvols", 00:03:56.293 "bdev_lvol_get_lvstores", 00:03:56.293 "bdev_lvol_delete", 00:03:56.293 "bdev_lvol_set_read_only", 00:03:56.293 "bdev_lvol_resize", 00:03:56.293 "bdev_lvol_decouple_parent", 00:03:56.293 "bdev_lvol_inflate", 00:03:56.293 "bdev_lvol_rename", 00:03:56.293 "bdev_lvol_clone_bdev", 00:03:56.293 "bdev_lvol_clone", 00:03:56.293 "bdev_lvol_snapshot", 00:03:56.293 "bdev_lvol_create", 00:03:56.293 "bdev_lvol_delete_lvstore", 00:03:56.293 "bdev_lvol_rename_lvstore", 00:03:56.293 "bdev_lvol_create_lvstore", 00:03:56.293 "bdev_raid_set_options", 00:03:56.293 "bdev_raid_remove_base_bdev", 00:03:56.293 "bdev_raid_add_base_bdev", 00:03:56.293 "bdev_raid_delete", 00:03:56.293 "bdev_raid_create", 
00:03:56.293 "bdev_raid_get_bdevs", 00:03:56.293 "bdev_error_inject_error", 00:03:56.293 "bdev_error_delete", 00:03:56.293 "bdev_error_create", 00:03:56.293 "bdev_split_delete", 00:03:56.293 "bdev_split_create", 00:03:56.293 "bdev_delay_delete", 00:03:56.293 "bdev_delay_create", 00:03:56.293 "bdev_delay_update_latency", 00:03:56.293 "bdev_zone_block_delete", 00:03:56.293 "bdev_zone_block_create", 00:03:56.293 "blobfs_create", 00:03:56.293 "blobfs_detect", 00:03:56.293 "blobfs_set_cache_size", 00:03:56.293 "bdev_aio_delete", 00:03:56.293 "bdev_aio_rescan", 00:03:56.293 "bdev_aio_create", 00:03:56.293 "bdev_ftl_set_property", 00:03:56.293 "bdev_ftl_get_properties", 00:03:56.293 "bdev_ftl_get_stats", 00:03:56.293 "bdev_ftl_unmap", 00:03:56.293 "bdev_ftl_unload", 00:03:56.293 "bdev_ftl_delete", 00:03:56.293 "bdev_ftl_load", 00:03:56.293 "bdev_ftl_create", 00:03:56.293 "bdev_virtio_attach_controller", 00:03:56.293 "bdev_virtio_scsi_get_devices", 00:03:56.293 "bdev_virtio_detach_controller", 00:03:56.293 "bdev_virtio_blk_set_hotplug", 00:03:56.293 "bdev_iscsi_delete", 00:03:56.294 "bdev_iscsi_create", 00:03:56.294 "bdev_iscsi_set_options", 00:03:56.294 "accel_error_inject_error", 00:03:56.294 "ioat_scan_accel_module", 00:03:56.294 "dsa_scan_accel_module", 00:03:56.294 "iaa_scan_accel_module", 00:03:56.294 "vfu_virtio_create_scsi_endpoint", 00:03:56.294 "vfu_virtio_scsi_remove_target", 00:03:56.294 "vfu_virtio_scsi_add_target", 00:03:56.294 "vfu_virtio_create_blk_endpoint", 00:03:56.294 "vfu_virtio_delete_endpoint", 00:03:56.294 "keyring_file_remove_key", 00:03:56.294 "keyring_file_add_key", 00:03:56.294 "iscsi_set_options", 00:03:56.294 "iscsi_get_auth_groups", 00:03:56.294 "iscsi_auth_group_remove_secret", 00:03:56.294 "iscsi_auth_group_add_secret", 00:03:56.294 "iscsi_delete_auth_group", 00:03:56.294 "iscsi_create_auth_group", 00:03:56.294 "iscsi_set_discovery_auth", 00:03:56.294 "iscsi_get_options", 00:03:56.294 "iscsi_target_node_request_logout", 00:03:56.294 "iscsi_target_node_set_redirect", 00:03:56.294 "iscsi_target_node_set_auth", 00:03:56.294 "iscsi_target_node_add_lun", 00:03:56.294 "iscsi_get_stats", 00:03:56.294 "iscsi_get_connections", 00:03:56.294 "iscsi_portal_group_set_auth", 00:03:56.294 "iscsi_start_portal_group", 00:03:56.294 "iscsi_delete_portal_group", 00:03:56.294 "iscsi_create_portal_group", 00:03:56.294 "iscsi_get_portal_groups", 00:03:56.294 "iscsi_delete_target_node", 00:03:56.294 "iscsi_target_node_remove_pg_ig_maps", 00:03:56.294 "iscsi_target_node_add_pg_ig_maps", 00:03:56.294 "iscsi_create_target_node", 00:03:56.294 "iscsi_get_target_nodes", 00:03:56.294 "iscsi_delete_initiator_group", 00:03:56.294 "iscsi_initiator_group_remove_initiators", 00:03:56.294 "iscsi_initiator_group_add_initiators", 00:03:56.294 "iscsi_create_initiator_group", 00:03:56.294 "iscsi_get_initiator_groups", 00:03:56.294 "nvmf_set_crdt", 00:03:56.294 "nvmf_set_config", 00:03:56.294 "nvmf_set_max_subsystems", 00:03:56.294 "nvmf_subsystem_get_listeners", 00:03:56.294 "nvmf_subsystem_get_qpairs", 00:03:56.294 "nvmf_subsystem_get_controllers", 00:03:56.294 "nvmf_get_stats", 00:03:56.294 "nvmf_get_transports", 00:03:56.294 "nvmf_create_transport", 00:03:56.294 "nvmf_get_targets", 00:03:56.294 "nvmf_delete_target", 00:03:56.294 "nvmf_create_target", 00:03:56.294 "nvmf_subsystem_allow_any_host", 00:03:56.294 "nvmf_subsystem_remove_host", 00:03:56.294 "nvmf_subsystem_add_host", 00:03:56.294 "nvmf_ns_remove_host", 00:03:56.294 "nvmf_ns_add_host", 00:03:56.294 "nvmf_subsystem_remove_ns", 00:03:56.294 
"nvmf_subsystem_add_ns", 00:03:56.294 "nvmf_subsystem_listener_set_ana_state", 00:03:56.294 "nvmf_discovery_get_referrals", 00:03:56.294 "nvmf_discovery_remove_referral", 00:03:56.294 "nvmf_discovery_add_referral", 00:03:56.294 "nvmf_subsystem_remove_listener", 00:03:56.294 "nvmf_subsystem_add_listener", 00:03:56.294 "nvmf_delete_subsystem", 00:03:56.294 "nvmf_create_subsystem", 00:03:56.294 "nvmf_get_subsystems", 00:03:56.294 "env_dpdk_get_mem_stats", 00:03:56.294 "nbd_get_disks", 00:03:56.294 "nbd_stop_disk", 00:03:56.294 "nbd_start_disk", 00:03:56.294 "ublk_recover_disk", 00:03:56.294 "ublk_get_disks", 00:03:56.294 "ublk_stop_disk", 00:03:56.294 "ublk_start_disk", 00:03:56.294 "ublk_destroy_target", 00:03:56.294 "ublk_create_target", 00:03:56.294 "virtio_blk_create_transport", 00:03:56.294 "virtio_blk_get_transports", 00:03:56.294 "vhost_controller_set_coalescing", 00:03:56.294 "vhost_get_controllers", 00:03:56.294 "vhost_delete_controller", 00:03:56.294 "vhost_create_blk_controller", 00:03:56.294 "vhost_scsi_controller_remove_target", 00:03:56.294 "vhost_scsi_controller_add_target", 00:03:56.294 "vhost_start_scsi_controller", 00:03:56.294 "vhost_create_scsi_controller", 00:03:56.294 "thread_set_cpumask", 00:03:56.294 "framework_get_scheduler", 00:03:56.294 "framework_set_scheduler", 00:03:56.294 "framework_get_reactors", 00:03:56.294 "thread_get_io_channels", 00:03:56.294 "thread_get_pollers", 00:03:56.294 "thread_get_stats", 00:03:56.294 "framework_monitor_context_switch", 00:03:56.294 "spdk_kill_instance", 00:03:56.294 "log_enable_timestamps", 00:03:56.294 "log_get_flags", 00:03:56.294 "log_clear_flag", 00:03:56.294 "log_set_flag", 00:03:56.294 "log_get_level", 00:03:56.294 "log_set_level", 00:03:56.294 "log_get_print_level", 00:03:56.294 "log_set_print_level", 00:03:56.294 "framework_enable_cpumask_locks", 00:03:56.294 "framework_disable_cpumask_locks", 00:03:56.294 "framework_wait_init", 00:03:56.294 "framework_start_init", 00:03:56.294 "scsi_get_devices", 00:03:56.294 "bdev_get_histogram", 00:03:56.294 "bdev_enable_histogram", 00:03:56.294 "bdev_set_qos_limit", 00:03:56.294 "bdev_set_qd_sampling_period", 00:03:56.294 "bdev_get_bdevs", 00:03:56.294 "bdev_reset_iostat", 00:03:56.294 "bdev_get_iostat", 00:03:56.294 "bdev_examine", 00:03:56.294 "bdev_wait_for_examine", 00:03:56.294 "bdev_set_options", 00:03:56.294 "notify_get_notifications", 00:03:56.294 "notify_get_types", 00:03:56.294 "accel_get_stats", 00:03:56.294 "accel_set_options", 00:03:56.294 "accel_set_driver", 00:03:56.294 "accel_crypto_key_destroy", 00:03:56.294 "accel_crypto_keys_get", 00:03:56.294 "accel_crypto_key_create", 00:03:56.294 "accel_assign_opc", 00:03:56.294 "accel_get_module_info", 00:03:56.294 "accel_get_opc_assignments", 00:03:56.294 "vmd_rescan", 00:03:56.294 "vmd_remove_device", 00:03:56.294 "vmd_enable", 00:03:56.294 "sock_set_default_impl", 00:03:56.294 "sock_impl_set_options", 00:03:56.294 "sock_impl_get_options", 00:03:56.294 "iobuf_get_stats", 00:03:56.294 "iobuf_set_options", 00:03:56.294 "keyring_get_keys", 00:03:56.294 "framework_get_pci_devices", 00:03:56.294 "framework_get_config", 00:03:56.294 "framework_get_subsystems", 00:03:56.294 "vfu_tgt_set_base_path", 00:03:56.294 "trace_get_info", 00:03:56.294 "trace_get_tpoint_group_mask", 00:03:56.294 "trace_disable_tpoint_group", 00:03:56.294 "trace_enable_tpoint_group", 00:03:56.294 "trace_clear_tpoint_mask", 00:03:56.294 "trace_set_tpoint_mask", 00:03:56.294 "spdk_get_version", 00:03:56.294 "rpc_get_methods" 00:03:56.294 ] 00:03:56.294 03:17:33 -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:56.294 03:17:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:56.294 03:17:33 -- common/autotest_common.sh@10 -- # set +x 00:03:56.294 03:17:33 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:56.294 03:17:33 -- spdkcli/tcp.sh@38 -- # killprocess 134333 00:03:56.294 03:17:33 -- common/autotest_common.sh@936 -- # '[' -z 134333 ']' 00:03:56.294 03:17:33 -- common/autotest_common.sh@940 -- # kill -0 134333 00:03:56.294 03:17:33 -- common/autotest_common.sh@941 -- # uname 00:03:56.294 03:17:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:56.294 03:17:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134333 00:03:56.294 03:17:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:56.294 03:17:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:56.294 03:17:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134333' 00:03:56.294 killing process with pid 134333 00:03:56.294 03:17:33 -- common/autotest_common.sh@955 -- # kill 134333 00:03:56.294 03:17:33 -- common/autotest_common.sh@960 -- # wait 134333 00:03:56.861 00:03:56.861 real 0m1.285s 00:03:56.861 user 0m2.226s 00:03:56.861 sys 0m0.476s 00:03:56.861 03:17:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:56.861 03:17:34 -- common/autotest_common.sh@10 -- # set +x 00:03:56.861 ************************************ 00:03:56.861 END TEST spdkcli_tcp 00:03:56.861 ************************************ 00:03:56.861 03:17:34 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:56.861 03:17:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.861 03:17:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.861 03:17:34 -- common/autotest_common.sh@10 -- # set +x 00:03:56.861 ************************************ 00:03:56.861 START TEST dpdk_mem_utility 00:03:56.861 ************************************ 00:03:56.861 03:17:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:56.861 * Looking for test storage... 00:03:56.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:56.861 03:17:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:56.861 03:17:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=134547 00:03:56.861 03:17:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.861 03:17:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 134547 00:03:56.861 03:17:34 -- common/autotest_common.sh@817 -- # '[' -z 134547 ']' 00:03:56.861 03:17:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.861 03:17:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:56.861 03:17:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
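The spdkcli_tcp suite that just finished never had the target listen on TCP itself: socat accepted on 127.0.0.1:9998 and forwarded each connection to the UNIX socket, while rpc.py spoke TCP with retries. Reduced to its essentials (paths shortened; the target is assumed to already be serving /var/tmp/spdk.sock):

    # The TCP<->UNIX bridge behind spdkcli_tcp, in outline.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r 100: up to 100 connection retries; -t 2: 2 s timeout per call
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"

Bridging this way lets one run exercise the TCP path of the JSON-RPC server without any server-side listener configuration.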
00:03:56.861 03:17:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:56.861 03:17:34 -- common/autotest_common.sh@10 -- # set +x 00:03:57.120 [2024-04-19 03:17:34.450886] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:03:57.120 [2024-04-19 03:17:34.450968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134547 ] 00:03:57.120 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.120 [2024-04-19 03:17:34.507097] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.120 [2024-04-19 03:17:34.613809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.378 03:17:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:57.378 03:17:34 -- common/autotest_common.sh@850 -- # return 0 00:03:57.378 03:17:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:57.378 03:17:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:57.378 03:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:57.378 03:17:34 -- common/autotest_common.sh@10 -- # set +x 00:03:57.378 { 00:03:57.378 "filename": "/tmp/spdk_mem_dump.txt" 00:03:57.378 } 00:03:57.378 03:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:57.378 03:17:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:57.637 DPDK memory size 814.000000 MiB in 1 heap(s) 00:03:57.637 1 heaps totaling size 814.000000 MiB 00:03:57.637 size: 814.000000 MiB heap id: 0 00:03:57.637 end heaps---------- 00:03:57.637 8 mempools totaling size 598.116089 MiB 00:03:57.637 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:57.637 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:57.637 size: 84.521057 MiB name: bdev_io_134547 00:03:57.637 size: 51.011292 MiB name: evtpool_134547 00:03:57.637 size: 50.003479 MiB name: msgpool_134547 00:03:57.637 size: 21.763794 MiB name: PDU_Pool 00:03:57.637 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:57.637 size: 0.026123 MiB name: Session_Pool 00:03:57.637 end mempools------- 00:03:57.637 6 memzones totaling size 4.142822 MiB 00:03:57.637 size: 1.000366 MiB name: RG_ring_0_134547 00:03:57.637 size: 1.000366 MiB name: RG_ring_1_134547 00:03:57.637 size: 1.000366 MiB name: RG_ring_4_134547 00:03:57.637 size: 1.000366 MiB name: RG_ring_5_134547 00:03:57.637 size: 0.125366 MiB name: RG_ring_2_134547 00:03:57.637 size: 0.015991 MiB name: RG_ring_3_134547 00:03:57.637 end memzones------- 00:03:57.637 03:17:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:03:57.637 heap id: 0 total size: 814.000000 MiB number of busy elements: 42 number of free elements: 15 00:03:57.637 list of free elements. 
size: 12.517212 MiB 00:03:57.637 element at address: 0x200000400000 with size: 1.999512 MiB 00:03:57.637 element at address: 0x200018e00000 with size: 0.999878 MiB 00:03:57.637 element at address: 0x200019000000 with size: 0.999878 MiB 00:03:57.637 element at address: 0x200003e00000 with size: 0.996277 MiB 00:03:57.637 element at address: 0x200031c00000 with size: 0.994446 MiB 00:03:57.637 element at address: 0x200013800000 with size: 0.978699 MiB 00:03:57.637 element at address: 0x200007000000 with size: 0.959839 MiB 00:03:57.637 element at address: 0x200019200000 with size: 0.936584 MiB 00:03:57.637 element at address: 0x200000200000 with size: 0.841614 MiB 00:03:57.637 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:03:57.637 element at address: 0x20000b200000 with size: 0.490723 MiB 00:03:57.637 element at address: 0x200000800000 with size: 0.487793 MiB 00:03:57.637 element at address: 0x200019400000 with size: 0.485657 MiB 00:03:57.637 element at address: 0x200027e00000 with size: 0.410034 MiB 00:03:57.637 element at address: 0x200003a00000 with size: 0.353394 MiB 00:03:57.637 list of standard malloc elements. size: 199.220215 MiB 00:03:57.637 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:03:57.637 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:03:57.637 element at address: 0x200018efff80 with size: 1.000122 MiB 00:03:57.637 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:03:57.637 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:03:57.637 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:57.637 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:03:57.637 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:57.637 element at address: 0x200003aff280 with size: 0.002136 MiB 00:03:57.637 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:03:57.637 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:03:57.637 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:03:57.637 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:03:57.637 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:03:57.637 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:03:57.637 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:57.637 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:57.637 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:03:57.637 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:03:57.637 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:03:57.637 element at address: 0x200003a5a780 with size: 0.000183 MiB 00:03:57.637 element at address: 0x200003adaa40 with size: 0.000183 MiB 00:03:57.637 element at address: 0x200003adac40 with size: 0.000183 MiB 00:03:57.637 element at address: 0x200003adef00 with size: 0.000183 MiB 00:03:57.637 element at address: 0x200003aff1c0 with size: 0.000183 MiB 00:03:57.637 element at address: 0x200003affb40 with size: 0.000183 MiB 00:03:57.637 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:03:57.637 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:03:57.637 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:03:57.637 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:03:57.637 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:03:57.637 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:03:57.637 element at address: 0x2000192efc40 with size: 0.000183 MiB 
00:03:57.637 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:03:57.637 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:03:57.637 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:03:57.637 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:03:57.637 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:03:57.637 element at address: 0x200027e69040 with size: 0.000183 MiB 00:03:57.637 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:03:57.637 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:03:57.637 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:03:57.637 list of memzone associated elements. size: 602.262573 MiB 00:03:57.637 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:03:57.637 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:57.637 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:03:57.637 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:57.637 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:03:57.637 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_134547_0 00:03:57.637 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:03:57.637 associated memzone info: size: 48.002930 MiB name: MP_evtpool_134547_0 00:03:57.637 element at address: 0x200003fff380 with size: 48.003052 MiB 00:03:57.637 associated memzone info: size: 48.002930 MiB name: MP_msgpool_134547_0 00:03:57.637 element at address: 0x2000195be940 with size: 20.255554 MiB 00:03:57.637 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:57.637 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:03:57.637 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:57.637 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:03:57.637 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_134547 00:03:57.637 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:03:57.637 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_134547 00:03:57.637 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:57.637 associated memzone info: size: 1.007996 MiB name: MP_evtpool_134547 00:03:57.637 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:03:57.637 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:57.637 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:03:57.637 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:57.637 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:03:57.637 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:57.637 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:03:57.637 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:57.637 element at address: 0x200003eff180 with size: 1.000488 MiB 00:03:57.637 associated memzone info: size: 1.000366 MiB name: RG_ring_0_134547 00:03:57.637 element at address: 0x200003affc00 with size: 1.000488 MiB 00:03:57.637 associated memzone info: size: 1.000366 MiB name: RG_ring_1_134547 00:03:57.637 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:03:57.637 associated memzone info: size: 1.000366 MiB name: RG_ring_4_134547 00:03:57.637 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:03:57.637 associated memzone info: size: 1.000366 MiB name: RG_ring_5_134547 00:03:57.637 element at address: 
0x200003a5a840 with size: 0.500488 MiB 00:03:57.637 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_134547 00:03:57.637 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:03:57.637 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:57.637 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:03:57.637 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:57.637 element at address: 0x20001947c540 with size: 0.250488 MiB 00:03:57.637 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:57.637 element at address: 0x200003adefc0 with size: 0.125488 MiB 00:03:57.637 associated memzone info: size: 0.125366 MiB name: RG_ring_2_134547 00:03:57.637 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:03:57.637 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:57.637 element at address: 0x200027e69100 with size: 0.023743 MiB 00:03:57.637 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:57.637 element at address: 0x200003adad00 with size: 0.016113 MiB 00:03:57.637 associated memzone info: size: 0.015991 MiB name: RG_ring_3_134547 00:03:57.637 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:03:57.637 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:57.638 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:03:57.638 associated memzone info: size: 0.000183 MiB name: MP_msgpool_134547 00:03:57.638 element at address: 0x200003adab00 with size: 0.000305 MiB 00:03:57.638 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_134547 00:03:57.638 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:03:57.638 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:57.638 03:17:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:57.638 03:17:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 134547 00:03:57.638 03:17:34 -- common/autotest_common.sh@936 -- # '[' -z 134547 ']' 00:03:57.638 03:17:34 -- common/autotest_common.sh@940 -- # kill -0 134547 00:03:57.638 03:17:34 -- common/autotest_common.sh@941 -- # uname 00:03:57.638 03:17:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:57.638 03:17:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134547 00:03:57.638 03:17:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:57.638 03:17:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:57.638 03:17:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134547' 00:03:57.638 killing process with pid 134547 00:03:57.638 03:17:35 -- common/autotest_common.sh@955 -- # kill 134547 00:03:57.638 03:17:35 -- common/autotest_common.sh@960 -- # wait 134547 00:03:58.205 00:03:58.205 real 0m1.136s 00:03:58.205 user 0m1.066s 00:03:58.205 sys 0m0.419s 00:03:58.205 03:17:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:58.205 03:17:35 -- common/autotest_common.sh@10 -- # set +x 00:03:58.205 ************************************ 00:03:58.205 END TEST dpdk_mem_utility 00:03:58.205 ************************************ 00:03:58.205 03:17:35 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:58.205 03:17:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.205 03:17:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.205 03:17:35 -- 
common/autotest_common.sh@10 -- # set +x 00:03:58.205 ************************************ 00:03:58.205 START TEST event 00:03:58.205 ************************************ 00:03:58.205 03:17:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:58.205 * Looking for test storage... 00:03:58.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:58.205 03:17:35 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:03:58.205 03:17:35 -- bdev/nbd_common.sh@6 -- # set -e 00:03:58.205 03:17:35 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:58.205 03:17:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:03:58.205 03:17:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.205 03:17:35 -- common/autotest_common.sh@10 -- # set +x 00:03:58.463 ************************************ 00:03:58.463 START TEST event_perf 00:03:58.463 ************************************ 00:03:58.463 03:17:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:58.463 Running I/O for 1 seconds...[2024-04-19 03:17:35.776799] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:03:58.463 [2024-04-19 03:17:35.776861] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134754 ] 00:03:58.463 EAL: No free 2048 kB hugepages reported on node 1 00:03:58.463 [2024-04-19 03:17:35.838675] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:58.463 [2024-04-19 03:17:35.958510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:58.463 [2024-04-19 03:17:35.958568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:03:58.463 [2024-04-19 03:17:35.958686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:03:58.463 [2024-04-19 03:17:35.958689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.876 Running I/O for 1 seconds... 00:03:59.876 lcore 0: 236984 00:03:59.876 lcore 1: 236982 00:03:59.876 lcore 2: 236983 00:03:59.876 lcore 3: 236984 00:03:59.876 done. 
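event_perf pins a reactor to each core in mask 0xF and prints how many events every lcore processed during the one-second run; the four counts above land within a few events of each other, which is the balance the test is after. One way to quantify that spread from the test's stdout (illustrative post-processing, and event_perf.log is a placeholder filename, not part of the test):

    # Illustrative check of per-lcore balance; not part of event_perf itself.
    awk '/^lcore/ {
             gsub(":", "", $2); count[$2] = $3; total += $3; n++
         }
         END {
             avg = total / n
             for (c in count)
                 printf "lcore %s: %d (%+.3f%% vs mean)\n", c, count[c], 100 * (count[c] - avg) / avg
         }' event_perf.log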
00:03:59.876 00:03:59.876 real 0m1.321s 00:03:59.876 user 0m4.230s 00:03:59.876 sys 0m0.086s 00:03:59.876 03:17:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:59.876 03:17:37 -- common/autotest_common.sh@10 -- # set +x 00:03:59.876 ************************************ 00:03:59.876 END TEST event_perf 00:03:59.876 ************************************ 00:03:59.876 03:17:37 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:59.876 03:17:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:03:59.876 03:17:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.876 03:17:37 -- common/autotest_common.sh@10 -- # set +x 00:03:59.876 ************************************ 00:03:59.876 START TEST event_reactor 00:03:59.876 ************************************ 00:03:59.876 03:17:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:59.876 [2024-04-19 03:17:37.221753] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:03:59.876 [2024-04-19 03:17:37.221815] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134989 ] 00:03:59.876 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.876 [2024-04-19 03:17:37.284565] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.876 [2024-04-19 03:17:37.401403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.256 test_start 00:04:01.256 oneshot 00:04:01.256 tick 100 00:04:01.256 tick 100 00:04:01.256 tick 250 00:04:01.256 tick 100 00:04:01.256 tick 100 00:04:01.256 tick 100 00:04:01.256 tick 250 00:04:01.256 tick 500 00:04:01.256 tick 100 00:04:01.256 tick 100 00:04:01.256 tick 250 00:04:01.256 tick 100 00:04:01.256 tick 100 00:04:01.256 test_end 00:04:01.256 00:04:01.256 real 0m1.314s 00:04:01.256 user 0m1.229s 00:04:01.256 sys 0m0.080s 00:04:01.256 03:17:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:01.256 03:17:38 -- common/autotest_common.sh@10 -- # set +x 00:04:01.256 ************************************ 00:04:01.256 END TEST event_reactor 00:04:01.256 ************************************ 00:04:01.256 03:17:38 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:01.256 03:17:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:01.256 03:17:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.256 03:17:38 -- common/autotest_common.sh@10 -- # set +x 00:04:01.256 ************************************ 00:04:01.256 START TEST event_reactor_perf 00:04:01.256 ************************************ 00:04:01.256 03:17:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:01.256 [2024-04-19 03:17:38.653810] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
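Every suite in this log runs under the same run_test wrapper, which is what produces the asterisk-framed START TEST / END TEST banners and propagates each body's status. In outline (simplified; the real wrapper in autotest_common.sh also validates its argument count, as the '[' 2 -le 1 ']' checks above show, and records per-test timing):

    # Outline of the run_test wrapper behind the banners in this log.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"                     # the suite body, e.g. a test script path
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }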
00:04:01.256 [2024-04-19 03:17:38.653867] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135211 ] 00:04:01.256 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.256 [2024-04-19 03:17:38.714929] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.517 [2024-04-19 03:17:38.832898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.457 test_start 00:04:02.457 test_end 00:04:02.457 Performance: 352930 events per second 00:04:02.457 00:04:02.457 real 0m1.314s 00:04:02.457 user 0m1.232s 00:04:02.457 sys 0m0.077s 00:04:02.457 03:17:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:02.457 03:17:39 -- common/autotest_common.sh@10 -- # set +x 00:04:02.457 ************************************ 00:04:02.457 END TEST event_reactor_perf 00:04:02.457 ************************************ 00:04:02.457 03:17:39 -- event/event.sh@49 -- # uname -s 00:04:02.457 03:17:39 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:02.457 03:17:39 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:02.457 03:17:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.457 03:17:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.457 03:17:39 -- common/autotest_common.sh@10 -- # set +x 00:04:02.716 ************************************ 00:04:02.716 START TEST event_scheduler 00:04:02.716 ************************************ 00:04:02.716 03:17:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:02.716 * Looking for test storage... 00:04:02.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:02.716 03:17:40 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:02.716 03:17:40 -- scheduler/scheduler.sh@35 -- # scheduler_pid=135400 00:04:02.716 03:17:40 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:02.716 03:17:40 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.716 03:17:40 -- scheduler/scheduler.sh@37 -- # waitforlisten 135400 00:04:02.716 03:17:40 -- common/autotest_common.sh@817 -- # '[' -z 135400 ']' 00:04:02.716 03:17:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.716 03:17:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:02.716 03:17:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.716 03:17:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:02.716 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:02.716 [2024-04-19 03:17:40.173070] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
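The scheduler app is started with --wait-for-rpc, so the framework comes up paused: the test can switch the scheduler to dynamic over RPC first and only then release initialization, which is when the reactors start and the cpufreq governors get taken over. Condensed from the scheduler.sh trace (binary and rpc.py paths shortened; -p 0x2 shows up as --main-lcore=2 in the EAL parameters below):

    # Condensed scheduler bring-up, per the trace; paths shortened.
    ./scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    waitforlisten "$scheduler_pid"

    rpc.py framework_set_scheduler dynamic   # must land before init completes
    rpc.py framework_start_init              # reactors start; governors switch to performance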
00:04:02.716 [2024-04-19 03:17:40.173155] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135400 ] 00:04:02.716 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.716 [2024-04-19 03:17:40.233774] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:02.975 [2024-04-19 03:17:40.344973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.975 [2024-04-19 03:17:40.345035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.975 [2024-04-19 03:17:40.345101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:02.975 [2024-04-19 03:17:40.345104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:02.975 03:17:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:02.975 03:17:40 -- common/autotest_common.sh@850 -- # return 0 00:04:02.975 03:17:40 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:02.975 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:02.976 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:02.976 POWER: Env isn't set yet! 00:04:02.976 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:02.976 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:02.976 POWER: Cannot get available frequencies of lcore 0 00:04:02.976 POWER: Attempting to initialise PSTAT power management... 00:04:02.976 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:02.976 POWER: Initialized successfully for lcore 0 power management 00:04:02.976 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:02.976 POWER: Initialized successfully for lcore 1 power management 00:04:02.976 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:02.976 POWER: Initialized successfully for lcore 2 power management 00:04:02.976 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:02.976 POWER: Initialized successfully for lcore 3 power management 00:04:02.976 03:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.976 03:17:40 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:02.976 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:02.976 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:02.976 [2024-04-19 03:17:40.520345] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
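scheduler_create_thread then feeds the dynamic scheduler synthetic load through a test-side RPC plugin: --plugin scheduler_plugin makes rpc.py import the extra methods, and each create names a thread, optionally pins it with a cpumask, and declares how busy it is (-a 100 fully active, -a 0 idle). The calls issued below, spelled out:

    # The plugin RPCs exercised below; rpc.py must be able to import
    # scheduler_plugin (the test arranges the Python path for this).
    rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30   # unpinned, ~30% busy
    rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                   # retune thread 11 to 50%
    rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                          # drop thread 12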
00:04:02.976 03:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.976 03:17:40 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:02.976 03:17:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.976 03:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.976 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:03.236 ************************************ 00:04:03.236 START TEST scheduler_create_thread 00:04:03.236 ************************************ 00:04:03.236 03:17:40 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:04:03.236 03:17:40 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:03.236 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.236 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:03.236 2 00:04:03.236 03:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.236 03:17:40 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:03.236 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.236 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:03.236 3 00:04:03.236 03:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.236 03:17:40 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:03.236 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.236 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:03.236 4 00:04:03.236 03:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.236 03:17:40 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:03.236 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.236 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:03.236 5 00:04:03.236 03:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.236 03:17:40 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:03.236 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.236 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:03.236 6 00:04:03.236 03:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.236 03:17:40 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:03.236 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.236 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:03.236 7 00:04:03.236 03:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.236 03:17:40 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:03.236 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.236 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:03.236 8 00:04:03.236 03:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.236 03:17:40 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:03.236 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.236 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:03.236 9 00:04:03.236 
03:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.236 03:17:40 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:03.236 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.236 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:03.236 10 00:04:03.236 03:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.236 03:17:40 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:03.236 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.236 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:03.236 03:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.236 03:17:40 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:03.236 03:17:40 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:03.236 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.236 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:03.236 03:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.236 03:17:40 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:03.236 03:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.236 03:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:04.176 03:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:04.176 03:17:41 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:04.176 03:17:41 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:04.176 03:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:04.176 03:17:41 -- common/autotest_common.sh@10 -- # set +x 00:04:05.555 03:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:05.555 00:04:05.555 real 0m2.136s 00:04:05.555 user 0m0.010s 00:04:05.555 sys 0m0.004s 00:04:05.555 03:17:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:05.555 03:17:42 -- common/autotest_common.sh@10 -- # set +x 00:04:05.555 ************************************ 00:04:05.555 END TEST scheduler_create_thread 00:04:05.555 ************************************ 00:04:05.555 03:17:42 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:05.555 03:17:42 -- scheduler/scheduler.sh@46 -- # killprocess 135400 00:04:05.555 03:17:42 -- common/autotest_common.sh@936 -- # '[' -z 135400 ']' 00:04:05.555 03:17:42 -- common/autotest_common.sh@940 -- # kill -0 135400 00:04:05.555 03:17:42 -- common/autotest_common.sh@941 -- # uname 00:04:05.555 03:17:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:05.555 03:17:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135400 00:04:05.555 03:17:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:05.555 03:17:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:05.555 03:17:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135400' 00:04:05.555 killing process with pid 135400 00:04:05.555 03:17:42 -- common/autotest_common.sh@955 -- # kill 135400 00:04:05.555 03:17:42 -- common/autotest_common.sh@960 -- # wait 135400 00:04:05.813 [2024-04-19 03:17:43.233873] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
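On shutdown the app undoes its power-management changes; the POWER lines just below show each core's cpufreq governor being put back (userspace on core 0, schedutil on cores 1-3). The same state can be inspected or restored by hand through sysfs, which is generic Linux usage rather than the SPDK code path:

    # Generic sysfs inspection/restore of cpufreq governors (needs root to write).
    for cpu in /sys/devices/system/cpu/cpu[0-3]; do
        echo "$cpu: $(cat "$cpu"/cpufreq/scaling_governor)"
    done
    echo userspace | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor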
00:04:05.813 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:05.813 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:05.813 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:05.813 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:05.813 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:05.813 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:05.813 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:05.813 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:06.072 00:04:06.072 real 0m3.417s 00:04:06.072 user 0m4.810s 00:04:06.072 sys 0m0.376s 00:04:06.072 03:17:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:06.072 03:17:43 -- common/autotest_common.sh@10 -- # set +x 00:04:06.072 ************************************ 00:04:06.072 END TEST event_scheduler 00:04:06.072 ************************************ 00:04:06.072 03:17:43 -- event/event.sh@51 -- # modprobe -n nbd 00:04:06.072 03:17:43 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:06.072 03:17:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.072 03:17:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.072 03:17:43 -- common/autotest_common.sh@10 -- # set +x 00:04:06.332 ************************************ 00:04:06.332 START TEST app_repeat 00:04:06.332 ************************************ 00:04:06.332 03:17:43 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:04:06.332 03:17:43 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:06.332 03:17:43 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:06.332 03:17:43 -- event/event.sh@13 -- # local nbd_list 00:04:06.332 03:17:43 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:06.332 03:17:43 -- event/event.sh@14 -- # local bdev_list 00:04:06.332 03:17:43 -- event/event.sh@15 -- # local repeat_times=4 00:04:06.332 03:17:43 -- event/event.sh@17 -- # modprobe nbd 00:04:06.332 03:17:43 -- event/event.sh@19 -- # repeat_pid=135859 00:04:06.332 03:17:43 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:06.332 03:17:43 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.332 03:17:43 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 135859' 00:04:06.332 Process app_repeat pid: 135859 00:04:06.332 03:17:43 -- event/event.sh@23 -- # for i in {0..2} 00:04:06.332 03:17:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:06.332 spdk_app_start Round 0 00:04:06.332 03:17:43 -- event/event.sh@25 -- # waitforlisten 135859 /var/tmp/spdk-nbd.sock 00:04:06.332 03:17:43 -- common/autotest_common.sh@817 -- # '[' -z 135859 ']' 00:04:06.332 03:17:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:06.332 03:17:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:06.332 03:17:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:06.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:06.332 03:17:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:06.332 03:17:43 -- common/autotest_common.sh@10 -- # set +x 00:04:06.332 [2024-04-19 03:17:43.656338] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:06.332 [2024-04-19 03:17:43.656413] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135859 ] 00:04:06.332 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.332 [2024-04-19 03:17:43.720162] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:06.332 [2024-04-19 03:17:43.836184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.332 [2024-04-19 03:17:43.836190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.652 03:17:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:06.652 03:17:43 -- common/autotest_common.sh@850 -- # return 0 00:04:06.652 03:17:43 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:06.652 Malloc0 00:04:06.910 03:17:44 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:07.169 Malloc1 00:04:07.169 03:17:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:07.169 03:17:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.169 03:17:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:07.169 03:17:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:07.169 03:17:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.169 03:17:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:07.169 03:17:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:07.169 03:17:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.169 03:17:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:07.169 03:17:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:07.169 03:17:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.169 03:17:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:07.169 03:17:44 -- bdev/nbd_common.sh@12 -- # local i 00:04:07.170 03:17:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:07.170 03:17:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:07.170 03:17:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:07.170 /dev/nbd0 00:04:07.427 03:17:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:07.427 03:17:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:07.427 03:17:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:07.427 03:17:44 -- common/autotest_common.sh@855 -- # local i 00:04:07.427 03:17:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:07.427 03:17:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:07.427 03:17:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:07.427 03:17:44 -- 
common/autotest_common.sh@859 -- # break 00:04:07.427 03:17:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:07.427 03:17:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:07.427 03:17:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:07.427 1+0 records in 00:04:07.427 1+0 records out 00:04:07.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000159893 s, 25.6 MB/s 00:04:07.427 03:17:44 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.427 03:17:44 -- common/autotest_common.sh@872 -- # size=4096 00:04:07.427 03:17:44 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.427 03:17:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:07.427 03:17:44 -- common/autotest_common.sh@875 -- # return 0 00:04:07.427 03:17:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:07.427 03:17:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:07.427 03:17:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:07.427 /dev/nbd1 00:04:07.686 03:17:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:07.686 03:17:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:07.686 03:17:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:07.686 03:17:44 -- common/autotest_common.sh@855 -- # local i 00:04:07.686 03:17:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:07.686 03:17:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:07.686 03:17:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:07.686 03:17:44 -- common/autotest_common.sh@859 -- # break 00:04:07.686 03:17:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:07.686 03:17:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:07.686 03:17:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:07.686 1+0 records in 00:04:07.686 1+0 records out 00:04:07.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000153513 s, 26.7 MB/s 00:04:07.686 03:17:45 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.686 03:17:45 -- common/autotest_common.sh@872 -- # size=4096 00:04:07.686 03:17:45 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.686 03:17:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:07.686 03:17:45 -- common/autotest_common.sh@875 -- # return 0 00:04:07.686 03:17:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:07.686 03:17:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:07.686 03:17:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:07.686 03:17:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.686 03:17:45 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:07.944 03:17:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:07.944 { 00:04:07.944 "nbd_device": "/dev/nbd0", 00:04:07.944 "bdev_name": "Malloc0" 00:04:07.944 }, 00:04:07.944 { 00:04:07.944 "nbd_device": "/dev/nbd1", 
00:04:07.944 "bdev_name": "Malloc1" 00:04:07.944 } 00:04:07.944 ]' 00:04:07.944 03:17:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:07.944 { 00:04:07.944 "nbd_device": "/dev/nbd0", 00:04:07.944 "bdev_name": "Malloc0" 00:04:07.944 }, 00:04:07.944 { 00:04:07.944 "nbd_device": "/dev/nbd1", 00:04:07.944 "bdev_name": "Malloc1" 00:04:07.944 } 00:04:07.944 ]' 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:07.945 /dev/nbd1' 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:07.945 /dev/nbd1' 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@65 -- # count=2 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@95 -- # count=2 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:07.945 256+0 records in 00:04:07.945 256+0 records out 00:04:07.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00400084 s, 262 MB/s 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:07.945 256+0 records in 00:04:07.945 256+0 records out 00:04:07.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242072 s, 43.3 MB/s 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:07.945 256+0 records in 00:04:07.945 256+0 records out 00:04:07.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259697 s, 40.4 MB/s 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@51 -- # local i 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:07.945 03:17:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:08.203 03:17:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:08.203 03:17:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:08.203 03:17:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:08.203 03:17:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:08.203 03:17:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:08.203 03:17:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:08.203 03:17:45 -- bdev/nbd_common.sh@41 -- # break 00:04:08.203 03:17:45 -- bdev/nbd_common.sh@45 -- # return 0 00:04:08.203 03:17:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:08.203 03:17:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:08.461 03:17:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:08.461 03:17:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:08.461 03:17:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:08.461 03:17:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:08.461 03:17:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:08.461 03:17:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:08.461 03:17:45 -- bdev/nbd_common.sh@41 -- # break 00:04:08.461 03:17:45 -- bdev/nbd_common.sh@45 -- # return 0 00:04:08.461 03:17:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:08.461 03:17:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:08.461 03:17:45 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:08.721 03:17:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:08.721 03:17:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:08.721 03:17:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:08.721 03:17:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:08.721 03:17:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:08.721 03:17:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:08.721 03:17:46 -- bdev/nbd_common.sh@65 -- # true 00:04:08.721 03:17:46 -- bdev/nbd_common.sh@65 -- # count=0 00:04:08.721 03:17:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:08.721 03:17:46 -- bdev/nbd_common.sh@104 -- # count=0 00:04:08.721 03:17:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:08.721 03:17:46 -- bdev/nbd_common.sh@109 -- # return 0 00:04:08.721 03:17:46 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:08.981 03:17:46 -- event/event.sh@35 -- # 
sleep 3 00:04:09.241 [2024-04-19 03:17:46.667820] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:09.241 [2024-04-19 03:17:46.785677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.241 [2024-04-19 03:17:46.785677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.502 [2024-04-19 03:17:46.847683] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:09.502 [2024-04-19 03:17:46.847789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:12.037 03:17:49 -- event/event.sh@23 -- # for i in {0..2} 00:04:12.037 03:17:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:12.037 spdk_app_start Round 1 00:04:12.037 03:17:49 -- event/event.sh@25 -- # waitforlisten 135859 /var/tmp/spdk-nbd.sock 00:04:12.037 03:17:49 -- common/autotest_common.sh@817 -- # '[' -z 135859 ']' 00:04:12.037 03:17:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:12.037 03:17:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:12.037 03:17:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:12.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:12.037 03:17:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:12.037 03:17:49 -- common/autotest_common.sh@10 -- # set +x 00:04:12.295 03:17:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:12.295 03:17:49 -- common/autotest_common.sh@850 -- # return 0 00:04:12.295 03:17:49 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.554 Malloc0 00:04:12.554 03:17:49 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.810 Malloc1 00:04:12.810 03:17:50 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@12 -- # local i 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.810 03:17:50 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:13.068 /dev/nbd0 00:04:13.068 03:17:50 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:13.068 03:17:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:13.068 03:17:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:13.068 03:17:50 -- common/autotest_common.sh@855 -- # local i 00:04:13.068 03:17:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:13.068 03:17:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:13.068 03:17:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:13.068 03:17:50 -- common/autotest_common.sh@859 -- # break 00:04:13.068 03:17:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:13.068 03:17:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:13.068 03:17:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:13.068 1+0 records in 00:04:13.068 1+0 records out 00:04:13.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020278 s, 20.2 MB/s 00:04:13.068 03:17:50 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.068 03:17:50 -- common/autotest_common.sh@872 -- # size=4096 00:04:13.068 03:17:50 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.068 03:17:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:13.068 03:17:50 -- common/autotest_common.sh@875 -- # return 0 00:04:13.068 03:17:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.068 03:17:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.068 03:17:50 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:13.325 /dev/nbd1 00:04:13.325 03:17:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:13.325 03:17:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:13.325 03:17:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:13.325 03:17:50 -- common/autotest_common.sh@855 -- # local i 00:04:13.325 03:17:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:13.325 03:17:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:13.325 03:17:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:13.325 03:17:50 -- common/autotest_common.sh@859 -- # break 00:04:13.325 03:17:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:13.325 03:17:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:13.325 03:17:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:13.325 1+0 records in 00:04:13.325 1+0 records out 00:04:13.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176786 s, 23.2 MB/s 00:04:13.325 03:17:50 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.325 03:17:50 -- common/autotest_common.sh@872 -- # size=4096 00:04:13.325 03:17:50 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.325 03:17:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:13.325 03:17:50 -- common/autotest_common.sh@875 -- # return 0 00:04:13.325 03:17:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.325 03:17:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.325 03:17:50 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.325 03:17:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.325 03:17:50 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:13.583 { 00:04:13.583 "nbd_device": "/dev/nbd0", 00:04:13.583 "bdev_name": "Malloc0" 00:04:13.583 }, 00:04:13.583 { 00:04:13.583 "nbd_device": "/dev/nbd1", 00:04:13.583 "bdev_name": "Malloc1" 00:04:13.583 } 00:04:13.583 ]' 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:13.583 { 00:04:13.583 "nbd_device": "/dev/nbd0", 00:04:13.583 "bdev_name": "Malloc0" 00:04:13.583 }, 00:04:13.583 { 00:04:13.583 "nbd_device": "/dev/nbd1", 00:04:13.583 "bdev_name": "Malloc1" 00:04:13.583 } 00:04:13.583 ]' 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:13.583 /dev/nbd1' 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:13.583 /dev/nbd1' 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@65 -- # count=2 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@95 -- # count=2 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:13.583 256+0 records in 00:04:13.583 256+0 records out 00:04:13.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0038161 s, 275 MB/s 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:13.583 256+0 records in 00:04:13.583 256+0 records out 00:04:13.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236656 s, 44.3 MB/s 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:13.583 256+0 records in 00:04:13.583 256+0 records out 00:04:13.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262018 s, 40.0 MB/s 00:04:13.583 03:17:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@51 -- # local i 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.584 03:17:51 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:13.842 03:17:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:13.842 03:17:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:13.842 03:17:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:13.842 03:17:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.842 03:17:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.842 03:17:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:13.842 03:17:51 -- bdev/nbd_common.sh@41 -- # break 00:04:13.842 03:17:51 -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.842 03:17:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.842 03:17:51 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:14.100 03:17:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:14.100 03:17:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:14.100 03:17:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:14.100 03:17:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:14.100 03:17:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:14.100 03:17:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:14.100 03:17:51 -- bdev/nbd_common.sh@41 -- # break 00:04:14.100 03:17:51 -- bdev/nbd_common.sh@45 -- # return 0 00:04:14.100 03:17:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:14.100 03:17:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.100 03:17:51 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:14.358 03:17:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:14.358 03:17:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:14.358 03:17:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:14.358 03:17:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:14.358 03:17:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:14.358 03:17:51 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:04:14.358 03:17:51 -- bdev/nbd_common.sh@65 -- # true 00:04:14.358 03:17:51 -- bdev/nbd_common.sh@65 -- # count=0 00:04:14.358 03:17:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:14.358 03:17:51 -- bdev/nbd_common.sh@104 -- # count=0 00:04:14.358 03:17:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:14.358 03:17:51 -- bdev/nbd_common.sh@109 -- # return 0 00:04:14.358 03:17:51 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:14.617 03:17:52 -- event/event.sh@35 -- # sleep 3 00:04:15.185 [2024-04-19 03:17:52.435735] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:15.185 [2024-04-19 03:17:52.539630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.185 [2024-04-19 03:17:52.539634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.185 [2024-04-19 03:17:52.602335] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:15.185 [2024-04-19 03:17:52.602438] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:17.765 03:17:55 -- event/event.sh@23 -- # for i in {0..2} 00:04:17.765 03:17:55 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:17.765 spdk_app_start Round 2 00:04:17.765 03:17:55 -- event/event.sh@25 -- # waitforlisten 135859 /var/tmp/spdk-nbd.sock 00:04:17.765 03:17:55 -- common/autotest_common.sh@817 -- # '[' -z 135859 ']' 00:04:17.765 03:17:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:17.765 03:17:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:17.765 03:17:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:17.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
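Rounds 0 and 1 above run the same NBD data-verification cycle against the app_repeat instance. Stripped of the xtrace bookkeeping, the sequence reduces to the sketch below; every command is taken from the log, with RPC_PY and bare scratch-file names standing in for the full workspace paths.

    RPC_PY="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # create two 64 MB malloc bdevs with a 4096-byte block size
    $RPC_PY bdev_malloc_create 64 4096            # -> Malloc0
    $RPC_PY bdev_malloc_create 64 4096            # -> Malloc1

    # export them to the kernel as NBD block devices
    $RPC_PY nbd_start_disk Malloc0 /dev/nbd0
    $RPC_PY nbd_start_disk Malloc1 /dev/nbd1

    # push 1 MiB of random data through each device, then read it back
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$nbd"           # any mismatch fails the test
    done
    rm nbdrandtest

    # detach the devices and end this app iteration
    $RPC_PY nbd_stop_disk /dev/nbd0
    $RPC_PY nbd_stop_disk /dev/nbd1
    $RPC_PY spdk_kill_instance SIGTERM

The attached-device count is cross-checked before and after through nbd_get_disks piped into jq and grep -c /dev/nbd, which is where the '[' 2 -ne 2 ']' and '[' 0 -ne 0 ']' guards in the log come from.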
00:04:17.765 03:17:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:17.765 03:17:55 -- common/autotest_common.sh@10 -- # set +x 00:04:18.023 03:17:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:18.023 03:17:55 -- common/autotest_common.sh@850 -- # return 0 00:04:18.023 03:17:55 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.281 Malloc0 00:04:18.281 03:17:55 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.539 Malloc1 00:04:18.539 03:17:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@12 -- # local i 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.539 03:17:55 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:18.797 /dev/nbd0 00:04:18.797 03:17:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:18.797 03:17:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:18.797 03:17:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:18.797 03:17:56 -- common/autotest_common.sh@855 -- # local i 00:04:18.797 03:17:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:18.797 03:17:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:18.797 03:17:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:18.797 03:17:56 -- common/autotest_common.sh@859 -- # break 00:04:18.797 03:17:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:18.797 03:17:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:18.797 03:17:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.797 1+0 records in 00:04:18.797 1+0 records out 00:04:18.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000150487 s, 27.2 MB/s 00:04:18.797 03:17:56 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.797 03:17:56 -- common/autotest_common.sh@872 -- # size=4096 00:04:18.797 03:17:56 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.797 03:17:56 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:04:18.797 03:17:56 -- common/autotest_common.sh@875 -- # return 0 00:04:18.797 03:17:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.797 03:17:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.797 03:17:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:19.055 /dev/nbd1 00:04:19.055 03:17:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:19.055 03:17:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:19.055 03:17:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:19.055 03:17:56 -- common/autotest_common.sh@855 -- # local i 00:04:19.055 03:17:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:19.055 03:17:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:19.055 03:17:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:19.055 03:17:56 -- common/autotest_common.sh@859 -- # break 00:04:19.055 03:17:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:19.055 03:17:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:19.055 03:17:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.055 1+0 records in 00:04:19.055 1+0 records out 00:04:19.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236436 s, 17.3 MB/s 00:04:19.055 03:17:56 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.055 03:17:56 -- common/autotest_common.sh@872 -- # size=4096 00:04:19.055 03:17:56 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.055 03:17:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:19.055 03:17:56 -- common/autotest_common.sh@875 -- # return 0 00:04:19.055 03:17:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.055 03:17:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.055 03:17:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.055 03:17:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.055 03:17:56 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:19.313 { 00:04:19.313 "nbd_device": "/dev/nbd0", 00:04:19.313 "bdev_name": "Malloc0" 00:04:19.313 }, 00:04:19.313 { 00:04:19.313 "nbd_device": "/dev/nbd1", 00:04:19.313 "bdev_name": "Malloc1" 00:04:19.313 } 00:04:19.313 ]' 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:19.313 { 00:04:19.313 "nbd_device": "/dev/nbd0", 00:04:19.313 "bdev_name": "Malloc0" 00:04:19.313 }, 00:04:19.313 { 00:04:19.313 "nbd_device": "/dev/nbd1", 00:04:19.313 "bdev_name": "Malloc1" 00:04:19.313 } 00:04:19.313 ]' 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:19.313 /dev/nbd1' 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:19.313 /dev/nbd1' 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@65 -- # count=2 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@95 -- # count=2 00:04:19.313 03:17:56 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:19.313 256+0 records in 00:04:19.313 256+0 records out 00:04:19.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048917 s, 214 MB/s 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:19.313 256+0 records in 00:04:19.313 256+0 records out 00:04:19.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215011 s, 48.8 MB/s 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:19.313 256+0 records in 00:04:19.313 256+0 records out 00:04:19.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257092 s, 40.8 MB/s 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:19.313 03:17:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@51 -- # local i 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.314 03:17:56 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:19.572 03:17:57 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:19.572 03:17:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:19.572 03:17:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:19.572 03:17:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.572 03:17:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.572 03:17:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:19.572 03:17:57 -- bdev/nbd_common.sh@41 -- # break 00:04:19.572 03:17:57 -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.572 03:17:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.572 03:17:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:19.830 03:17:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:19.830 03:17:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:19.830 03:17:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:19.830 03:17:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.830 03:17:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.830 03:17:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:19.830 03:17:57 -- bdev/nbd_common.sh@41 -- # break 00:04:19.830 03:17:57 -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.830 03:17:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.830 03:17:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.830 03:17:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.088 03:17:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:20.088 03:17:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:20.088 03:17:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.088 03:17:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:20.088 03:17:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:20.088 03:17:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.088 03:17:57 -- bdev/nbd_common.sh@65 -- # true 00:04:20.088 03:17:57 -- bdev/nbd_common.sh@65 -- # count=0 00:04:20.088 03:17:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:20.088 03:17:57 -- bdev/nbd_common.sh@104 -- # count=0 00:04:20.088 03:17:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:20.088 03:17:57 -- bdev/nbd_common.sh@109 -- # return 0 00:04:20.088 03:17:57 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:20.347 03:17:57 -- event/event.sh@35 -- # sleep 3 00:04:20.607 [2024-04-19 03:17:58.117900] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:20.867 [2024-04-19 03:17:58.207590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.867 [2024-04-19 03:17:58.207594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.867 [2024-04-19 03:17:58.269607] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:20.867 [2024-04-19 03:17:58.269683] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
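The 'spdk_app_start Round N' markers come from the driver loop in event.sh, which repeats the verify cycle three times over the same socket and then waits for one final iteration. In outline, reconstructed from the echoed commands rather than the verbatim script:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock   # app_repeat listens again each round
        # ... create Malloc0/Malloc1 and run the NBD verify cycle sketched earlier ...
        $RPC_PY spdk_kill_instance SIGTERM                # ends the current app iteration
        sleep 3                                           # give app_repeat time to reinitialize
    done
    waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock       # Round 3, the last iteration
    killprocess "$app_pid"                                # SIGTERM to the test binary itself

The repeated "Notification type ... already registered" notices are a side effect of this design: the bdev module registers its notification types once per spdk_app_start, and from Round 1 onward they already exist in the still-running process.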
00:04:23.404 03:18:00 -- event/event.sh@38 -- # waitforlisten 135859 /var/tmp/spdk-nbd.sock 00:04:23.404 03:18:00 -- common/autotest_common.sh@817 -- # '[' -z 135859 ']' 00:04:23.404 03:18:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:23.404 03:18:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:23.404 03:18:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:23.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:23.404 03:18:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:23.404 03:18:00 -- common/autotest_common.sh@10 -- # set +x 00:04:23.662 03:18:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:23.662 03:18:01 -- common/autotest_common.sh@850 -- # return 0 00:04:23.662 03:18:01 -- event/event.sh@39 -- # killprocess 135859 00:04:23.662 03:18:01 -- common/autotest_common.sh@936 -- # '[' -z 135859 ']' 00:04:23.662 03:18:01 -- common/autotest_common.sh@940 -- # kill -0 135859 00:04:23.662 03:18:01 -- common/autotest_common.sh@941 -- # uname 00:04:23.662 03:18:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:23.662 03:18:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135859 00:04:23.662 03:18:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:23.662 03:18:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:23.662 03:18:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135859' 00:04:23.662 killing process with pid 135859 00:04:23.662 03:18:01 -- common/autotest_common.sh@955 -- # kill 135859 00:04:23.662 03:18:01 -- common/autotest_common.sh@960 -- # wait 135859 00:04:23.922 spdk_app_start is called in Round 0. 00:04:23.922 Shutdown signal received, stop current app iteration 00:04:23.922 Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 reinitialization... 00:04:23.922 spdk_app_start is called in Round 1. 00:04:23.922 Shutdown signal received, stop current app iteration 00:04:23.922 Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 reinitialization... 00:04:23.922 spdk_app_start is called in Round 2. 00:04:23.922 Shutdown signal received, stop current app iteration 00:04:23.922 Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 reinitialization... 00:04:23.922 spdk_app_start is called in Round 3. 
00:04:23.922 Shutdown signal received, stop current app iteration 00:04:23.922 03:18:01 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:23.922 03:18:01 -- event/event.sh@42 -- # return 0 00:04:23.922 00:04:23.922 real 0m17.738s 00:04:23.922 user 0m38.809s 00:04:23.922 sys 0m3.243s 00:04:23.922 03:18:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.922 03:18:01 -- common/autotest_common.sh@10 -- # set +x 00:04:23.922 ************************************ 00:04:23.922 END TEST app_repeat 00:04:23.922 ************************************ 00:04:23.922 03:18:01 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:23.922 03:18:01 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:23.922 03:18:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.922 03:18:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.922 03:18:01 -- common/autotest_common.sh@10 -- # set +x 00:04:24.181 ************************************ 00:04:24.181 START TEST cpu_locks 00:04:24.181 ************************************ 00:04:24.181 03:18:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:24.181 * Looking for test storage... 00:04:24.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:24.181 03:18:01 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:24.181 03:18:01 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:24.181 03:18:01 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:24.181 03:18:01 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:24.181 03:18:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.181 03:18:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.181 03:18:01 -- common/autotest_common.sh@10 -- # set +x 00:04:24.181 ************************************ 00:04:24.181 START TEST default_locks 00:04:24.181 ************************************ 00:04:24.181 03:18:01 -- common/autotest_common.sh@1111 -- # default_locks 00:04:24.181 03:18:01 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=138320 00:04:24.181 03:18:01 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:24.181 03:18:01 -- event/cpu_locks.sh@47 -- # waitforlisten 138320 00:04:24.181 03:18:01 -- common/autotest_common.sh@817 -- # '[' -z 138320 ']' 00:04:24.181 03:18:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.181 03:18:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:24.181 03:18:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.181 03:18:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:24.181 03:18:01 -- common/autotest_common.sh@10 -- # set +x 00:04:24.181 [2024-04-19 03:18:01.689623] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
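default_locks opens the way every cpu_locks case does: launch spdk_tgt pinned to core 0 with -m 0x1, then block in waitforlisten until the RPC socket answers. Only the helper's preamble is visible because it calls xtrace_disable, so the polling body below is an assumption rather than a transcript; in particular the rpc_get_methods probe and the retry interval are guesses, while max_retries=100 is the one internal detail the log does show.

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            # assumed probe: any cheap RPC proves the socket accepts connections
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
            kill -0 "$pid" 2> /dev/null || return 1   # give up if the target died
            sleep 0.5                                  # assumed retry interval
        done
        return 1
    }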
00:04:24.181 [2024-04-19 03:18:01.689728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138320 ] 00:04:24.181 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.441 [2024-04-19 03:18:01.755041] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.441 [2024-04-19 03:18:01.876981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.378 03:18:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:25.378 03:18:02 -- common/autotest_common.sh@850 -- # return 0 00:04:25.378 03:18:02 -- event/cpu_locks.sh@49 -- # locks_exist 138320 00:04:25.378 03:18:02 -- event/cpu_locks.sh@22 -- # lslocks -p 138320 00:04:25.378 03:18:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:25.637 lslocks: write error 00:04:25.637 03:18:02 -- event/cpu_locks.sh@50 -- # killprocess 138320 00:04:25.637 03:18:02 -- common/autotest_common.sh@936 -- # '[' -z 138320 ']' 00:04:25.637 03:18:02 -- common/autotest_common.sh@940 -- # kill -0 138320 00:04:25.637 03:18:02 -- common/autotest_common.sh@941 -- # uname 00:04:25.637 03:18:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:25.637 03:18:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138320 00:04:25.637 03:18:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:25.637 03:18:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:25.637 03:18:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138320' 00:04:25.637 killing process with pid 138320 00:04:25.637 03:18:02 -- common/autotest_common.sh@955 -- # kill 138320 00:04:25.637 03:18:02 -- common/autotest_common.sh@960 -- # wait 138320 00:04:25.898 03:18:03 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 138320 00:04:25.898 03:18:03 -- common/autotest_common.sh@638 -- # local es=0 00:04:25.898 03:18:03 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 138320 00:04:25.898 03:18:03 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:25.898 03:18:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:25.898 03:18:03 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:25.898 03:18:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:25.898 03:18:03 -- common/autotest_common.sh@641 -- # waitforlisten 138320 00:04:25.898 03:18:03 -- common/autotest_common.sh@817 -- # '[' -z 138320 ']' 00:04:25.898 03:18:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.898 03:18:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:25.898 03:18:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
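With the target up, locks_exist performs the actual assertion seen above: pid 138320 must hold a POSIX lock whose path matches the spdk_cpu_lock prefix, which is what a reactor takes for each core it claims. The "lslocks: write error" line is expected noise, since grep -q exits on the first match and lslocks loses its pipe mid-write. The check itself is just:

    locks_exist() {
        local pid=$1
        # passes only if the reactor is holding its CPU-core lock file
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 138320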
00:04:25.898 03:18:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:25.898 03:18:03 -- common/autotest_common.sh@10 -- # set +x 00:04:25.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (138320) - No such process 00:04:25.898 ERROR: process (pid: 138320) is no longer running 00:04:25.898 03:18:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:25.898 03:18:03 -- common/autotest_common.sh@850 -- # return 1 00:04:25.898 03:18:03 -- common/autotest_common.sh@641 -- # es=1 00:04:25.898 03:18:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:25.898 03:18:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:25.898 03:18:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:25.898 03:18:03 -- event/cpu_locks.sh@54 -- # no_locks 00:04:25.898 03:18:03 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:25.898 03:18:03 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:25.898 03:18:03 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:25.898 00:04:25.898 real 0m1.807s 00:04:25.898 user 0m1.934s 00:04:25.898 sys 0m0.594s 00:04:25.898 03:18:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:25.898 03:18:03 -- common/autotest_common.sh@10 -- # set +x 00:04:25.898 ************************************ 00:04:25.898 END TEST default_locks 00:04:25.898 ************************************ 00:04:26.156 03:18:03 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:26.156 03:18:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.156 03:18:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.156 03:18:03 -- common/autotest_common.sh@10 -- # set +x 00:04:26.156 ************************************ 00:04:26.156 START TEST default_locks_via_rpc 00:04:26.156 ************************************ 00:04:26.156 03:18:03 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:04:26.156 03:18:03 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=138639 00:04:26.156 03:18:03 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.156 03:18:03 -- event/cpu_locks.sh@63 -- # waitforlisten 138639 00:04:26.156 03:18:03 -- common/autotest_common.sh@817 -- # '[' -z 138639 ']' 00:04:26.156 03:18:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.156 03:18:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:26.156 03:18:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.156 03:18:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:26.156 03:18:03 -- common/autotest_common.sh@10 -- # set +x 00:04:26.156 [2024-04-19 03:18:03.622665] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
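The second half of default_locks is a negative check: after killprocess, waitforlisten against the stale pid must fail, so the call is wrapped in a NOT helper that inverts the exit status. The "No such process" and ERROR lines above are that expected failure, not a malfunction. A simplified sketch of the pattern behind the es= bookkeeping in the log (the real helper is stricter about what counts as an acceptable failure):

    NOT() {
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then return 1; fi   # death by signal is not a clean failure
        (( es != 0 ))                          # succeed only on an ordinary nonzero exit
    }
    NOT waitforlisten 138320                   # passes: pid 138320 was killed above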
00:04:26.156 [2024-04-19 03:18:03.622762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138639 ] 00:04:26.156 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.156 [2024-04-19 03:18:03.679886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.416 [2024-04-19 03:18:03.789712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.675 03:18:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:26.675 03:18:04 -- common/autotest_common.sh@850 -- # return 0 00:04:26.675 03:18:04 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:26.675 03:18:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:26.675 03:18:04 -- common/autotest_common.sh@10 -- # set +x 00:04:26.675 03:18:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:26.675 03:18:04 -- event/cpu_locks.sh@67 -- # no_locks 00:04:26.675 03:18:04 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:26.675 03:18:04 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:26.675 03:18:04 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:26.675 03:18:04 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:26.675 03:18:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:26.675 03:18:04 -- common/autotest_common.sh@10 -- # set +x 00:04:26.675 03:18:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:26.675 03:18:04 -- event/cpu_locks.sh@71 -- # locks_exist 138639 00:04:26.675 03:18:04 -- event/cpu_locks.sh@22 -- # lslocks -p 138639 00:04:26.675 03:18:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:26.935 03:18:04 -- event/cpu_locks.sh@73 -- # killprocess 138639 00:04:26.935 03:18:04 -- common/autotest_common.sh@936 -- # '[' -z 138639 ']' 00:04:26.935 03:18:04 -- common/autotest_common.sh@940 -- # kill -0 138639 00:04:26.935 03:18:04 -- common/autotest_common.sh@941 -- # uname 00:04:26.935 03:18:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:26.935 03:18:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138639 00:04:26.935 03:18:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:26.935 03:18:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:26.935 03:18:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138639' 00:04:26.935 killing process with pid 138639 00:04:26.935 03:18:04 -- common/autotest_common.sh@955 -- # kill 138639 00:04:26.935 03:18:04 -- common/autotest_common.sh@960 -- # wait 138639 00:04:27.506 00:04:27.506 real 0m1.210s 00:04:27.506 user 0m1.125s 00:04:27.506 sys 0m0.530s 00:04:27.506 03:18:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:27.506 03:18:04 -- common/autotest_common.sh@10 -- # set +x 00:04:27.506 ************************************ 00:04:27.506 END TEST default_locks_via_rpc 00:04:27.506 ************************************ 00:04:27.506 03:18:04 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:27.506 03:18:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.506 03:18:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.506 03:18:04 -- common/autotest_common.sh@10 -- # set +x 00:04:27.506 ************************************ 00:04:27.506 START TEST non_locking_app_on_locked_coremask 00:04:27.506 
************************************ 00:04:27.506 03:18:04 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:04:27.506 03:18:04 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=138815 00:04:27.506 03:18:04 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.506 03:18:04 -- event/cpu_locks.sh@81 -- # waitforlisten 138815 /var/tmp/spdk.sock 00:04:27.506 03:18:04 -- common/autotest_common.sh@817 -- # '[' -z 138815 ']' 00:04:27.506 03:18:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.506 03:18:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:27.506 03:18:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.506 03:18:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:27.506 03:18:04 -- common/autotest_common.sh@10 -- # set +x 00:04:27.506 [2024-04-19 03:18:04.955207] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:27.506 [2024-04-19 03:18:04.955293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138815 ] 00:04:27.506 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.506 [2024-04-19 03:18:05.012321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.765 [2024-04-19 03:18:05.124459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.023 03:18:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:28.023 03:18:05 -- common/autotest_common.sh@850 -- # return 0 00:04:28.023 03:18:05 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=138934 00:04:28.023 03:18:05 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:28.023 03:18:05 -- event/cpu_locks.sh@85 -- # waitforlisten 138934 /var/tmp/spdk2.sock 00:04:28.023 03:18:05 -- common/autotest_common.sh@817 -- # '[' -z 138934 ']' 00:04:28.023 03:18:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:28.023 03:18:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:28.023 03:18:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:28.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:28.023 03:18:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:28.023 03:18:05 -- common/autotest_common.sh@10 -- # set +x 00:04:28.023 [2024-04-19 03:18:05.430944] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:28.024 [2024-04-19 03:18:05.431026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138934 ] 00:04:28.024 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.024 [2024-04-19 03:18:05.524196] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:28.024 [2024-04-19 03:18:05.524229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.283 [2024-04-19 03:18:05.763183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.851 03:18:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:28.851 03:18:06 -- common/autotest_common.sh@850 -- # return 0 00:04:28.851 03:18:06 -- event/cpu_locks.sh@87 -- # locks_exist 138815 00:04:28.851 03:18:06 -- event/cpu_locks.sh@22 -- # lslocks -p 138815 00:04:28.851 03:18:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:29.418 lslocks: write error 00:04:29.418 03:18:06 -- event/cpu_locks.sh@89 -- # killprocess 138815 00:04:29.418 03:18:06 -- common/autotest_common.sh@936 -- # '[' -z 138815 ']' 00:04:29.418 03:18:06 -- common/autotest_common.sh@940 -- # kill -0 138815 00:04:29.418 03:18:06 -- common/autotest_common.sh@941 -- # uname 00:04:29.418 03:18:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:29.418 03:18:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138815 00:04:29.418 03:18:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:29.419 03:18:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:29.419 03:18:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138815' 00:04:29.419 killing process with pid 138815 00:04:29.419 03:18:06 -- common/autotest_common.sh@955 -- # kill 138815 00:04:29.419 03:18:06 -- common/autotest_common.sh@960 -- # wait 138815 00:04:30.354 03:18:07 -- event/cpu_locks.sh@90 -- # killprocess 138934 00:04:30.354 03:18:07 -- common/autotest_common.sh@936 -- # '[' -z 138934 ']' 00:04:30.354 03:18:07 -- common/autotest_common.sh@940 -- # kill -0 138934 00:04:30.354 03:18:07 -- common/autotest_common.sh@941 -- # uname 00:04:30.354 03:18:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:30.354 03:18:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138934 00:04:30.354 03:18:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:30.354 03:18:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:30.354 03:18:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138934' 00:04:30.354 killing process with pid 138934 00:04:30.354 03:18:07 -- common/autotest_common.sh@955 -- # kill 138934 00:04:30.354 03:18:07 -- common/autotest_common.sh@960 -- # wait 138934 00:04:30.924 00:04:30.924 real 0m3.276s 00:04:30.924 user 0m3.389s 00:04:30.924 sys 0m1.062s 00:04:30.924 03:18:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:30.924 03:18:08 -- common/autotest_common.sh@10 -- # set +x 00:04:30.924 ************************************ 00:04:30.924 END TEST non_locking_app_on_locked_coremask 00:04:30.924 ************************************ 00:04:30.924 03:18:08 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:30.924 03:18:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.924 03:18:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.924 03:18:08 -- common/autotest_common.sh@10 -- # set +x 00:04:30.924 ************************************ 00:04:30.924 START TEST locking_app_on_unlocked_coremask 00:04:30.924 ************************************ 00:04:30.924 03:18:08 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:04:30.924 03:18:08 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=139612 00:04:30.924 03:18:08 -- event/cpu_locks.sh@97 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:30.924 03:18:08 -- event/cpu_locks.sh@99 -- # waitforlisten 139612 /var/tmp/spdk.sock 00:04:30.924 03:18:08 -- common/autotest_common.sh@817 -- # '[' -z 139612 ']' 00:04:30.924 03:18:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.924 03:18:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:30.924 03:18:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.924 03:18:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:30.924 03:18:08 -- common/autotest_common.sh@10 -- # set +x 00:04:30.924 [2024-04-19 03:18:08.354156] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:30.924 [2024-04-19 03:18:08.354243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139612 ] 00:04:30.924 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.924 [2024-04-19 03:18:08.415054] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:30.924 [2024-04-19 03:18:08.415095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.183 [2024-04-19 03:18:08.536166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.441 03:18:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:31.441 03:18:08 -- common/autotest_common.sh@850 -- # return 0 00:04:31.441 03:18:08 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=139792 00:04:31.441 03:18:08 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:31.441 03:18:08 -- event/cpu_locks.sh@103 -- # waitforlisten 139792 /var/tmp/spdk2.sock 00:04:31.441 03:18:08 -- common/autotest_common.sh@817 -- # '[' -z 139792 ']' 00:04:31.441 03:18:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:31.441 03:18:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:31.441 03:18:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:31.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:31.441 03:18:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:31.441 03:18:08 -- common/autotest_common.sh@10 -- # set +x 00:04:31.441 [2024-04-19 03:18:08.844415] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:04:31.441 [2024-04-19 03:18:08.844491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139792 ] 00:04:31.441 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.441 [2024-04-19 03:18:08.938782] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.699 [2024-04-19 03:18:09.179693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.269 03:18:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:32.269 03:18:09 -- common/autotest_common.sh@850 -- # return 0 00:04:32.269 03:18:09 -- event/cpu_locks.sh@105 -- # locks_exist 139792 00:04:32.269 03:18:09 -- event/cpu_locks.sh@22 -- # lslocks -p 139792 00:04:32.269 03:18:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:32.837 lslocks: write error 00:04:32.837 03:18:10 -- event/cpu_locks.sh@107 -- # killprocess 139612 00:04:32.837 03:18:10 -- common/autotest_common.sh@936 -- # '[' -z 139612 ']' 00:04:32.837 03:18:10 -- common/autotest_common.sh@940 -- # kill -0 139612 00:04:32.837 03:18:10 -- common/autotest_common.sh@941 -- # uname 00:04:32.837 03:18:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:32.837 03:18:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139612 00:04:32.837 03:18:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:32.837 03:18:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:32.837 03:18:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139612' 00:04:32.837 killing process with pid 139612 00:04:32.837 03:18:10 -- common/autotest_common.sh@955 -- # kill 139612 00:04:32.837 03:18:10 -- common/autotest_common.sh@960 -- # wait 139612 00:04:33.808 03:18:11 -- event/cpu_locks.sh@108 -- # killprocess 139792 00:04:33.808 03:18:11 -- common/autotest_common.sh@936 -- # '[' -z 139792 ']' 00:04:33.808 03:18:11 -- common/autotest_common.sh@940 -- # kill -0 139792 00:04:33.808 03:18:11 -- common/autotest_common.sh@941 -- # uname 00:04:33.808 03:18:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:33.808 03:18:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139792 00:04:33.808 03:18:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:33.808 03:18:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:33.808 03:18:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139792' 00:04:33.808 killing process with pid 139792 00:04:33.808 03:18:11 -- common/autotest_common.sh@955 -- # kill 139792 00:04:33.808 03:18:11 -- common/autotest_common.sh@960 -- # wait 139792 00:04:34.404 00:04:34.404 real 0m3.477s 00:04:34.404 user 0m3.647s 00:04:34.404 sys 0m1.086s 00:04:34.404 03:18:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:34.404 03:18:11 -- common/autotest_common.sh@10 -- # set +x 00:04:34.404 ************************************ 00:04:34.404 END TEST locking_app_on_unlocked_coremask 00:04:34.404 ************************************ 00:04:34.404 03:18:11 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:34.404 03:18:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.404 03:18:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.404 03:18:11 -- common/autotest_common.sh@10 -- # set +x 00:04:34.404 
************************************ 00:04:34.404 START TEST locking_app_on_locked_coremask 00:04:34.404 ************************************ 00:04:34.405 03:18:11 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:04:34.405 03:18:11 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=140201 00:04:34.405 03:18:11 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.405 03:18:11 -- event/cpu_locks.sh@116 -- # waitforlisten 140201 /var/tmp/spdk.sock 00:04:34.405 03:18:11 -- common/autotest_common.sh@817 -- # '[' -z 140201 ']' 00:04:34.405 03:18:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.405 03:18:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:34.405 03:18:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.405 03:18:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:34.405 03:18:11 -- common/autotest_common.sh@10 -- # set +x 00:04:34.405 [2024-04-19 03:18:11.960123] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:34.405 [2024-04-19 03:18:11.960219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140201 ] 00:04:34.664 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.664 [2024-04-19 03:18:12.025853] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.664 [2024-04-19 03:18:12.142287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.603 03:18:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:35.603 03:18:12 -- common/autotest_common.sh@850 -- # return 0 00:04:35.603 03:18:12 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=140337 00:04:35.603 03:18:12 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:35.603 03:18:12 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 140337 /var/tmp/spdk2.sock 00:04:35.603 03:18:12 -- common/autotest_common.sh@638 -- # local es=0 00:04:35.603 03:18:12 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 140337 /var/tmp/spdk2.sock 00:04:35.603 03:18:12 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:35.603 03:18:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:35.603 03:18:12 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:35.603 03:18:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:35.603 03:18:12 -- common/autotest_common.sh@641 -- # waitforlisten 140337 /var/tmp/spdk2.sock 00:04:35.603 03:18:12 -- common/autotest_common.sh@817 -- # '[' -z 140337 ']' 00:04:35.603 03:18:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:35.603 03:18:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:35.603 03:18:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:35.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:35.603 03:18:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:35.603 03:18:12 -- common/autotest_common.sh@10 -- # set +x 00:04:35.603 [2024-04-19 03:18:12.936914] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:35.603 [2024-04-19 03:18:12.937015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140337 ] 00:04:35.603 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.603 [2024-04-19 03:18:13.032890] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 140201 has claimed it. 00:04:35.603 [2024-04-19 03:18:13.032958] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:36.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (140337) - No such process 00:04:36.168 ERROR: process (pid: 140337) is no longer running 00:04:36.168 03:18:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:36.168 03:18:13 -- common/autotest_common.sh@850 -- # return 1 00:04:36.168 03:18:13 -- common/autotest_common.sh@641 -- # es=1 00:04:36.168 03:18:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:36.168 03:18:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:36.168 03:18:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:36.168 03:18:13 -- event/cpu_locks.sh@122 -- # locks_exist 140201 00:04:36.168 03:18:13 -- event/cpu_locks.sh@22 -- # lslocks -p 140201 00:04:36.168 03:18:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.736 lslocks: write error 00:04:36.736 03:18:14 -- event/cpu_locks.sh@124 -- # killprocess 140201 00:04:36.736 03:18:14 -- common/autotest_common.sh@936 -- # '[' -z 140201 ']' 00:04:36.736 03:18:14 -- common/autotest_common.sh@940 -- # kill -0 140201 00:04:36.736 03:18:14 -- common/autotest_common.sh@941 -- # uname 00:04:36.736 03:18:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:36.736 03:18:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140201 00:04:36.736 03:18:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:36.736 03:18:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:36.736 03:18:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140201' 00:04:36.736 killing process with pid 140201 00:04:36.736 03:18:14 -- common/autotest_common.sh@955 -- # kill 140201 00:04:36.736 03:18:14 -- common/autotest_common.sh@960 -- # wait 140201 00:04:36.993 00:04:36.993 real 0m2.620s 00:04:36.993 user 0m2.975s 00:04:36.993 sys 0m0.686s 00:04:36.993 03:18:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.993 03:18:14 -- common/autotest_common.sh@10 -- # set +x 00:04:36.993 ************************************ 00:04:36.993 END TEST locking_app_on_locked_coremask 00:04:36.993 ************************************ 00:04:36.993 03:18:14 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:36.993 03:18:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.993 03:18:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.993 03:18:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.252 ************************************ 00:04:37.252 START TEST locking_overlapped_coremask 00:04:37.252 
************************************ 00:04:37.252 03:18:14 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:04:37.252 03:18:14 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=140636 00:04:37.252 03:18:14 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:37.252 03:18:14 -- event/cpu_locks.sh@133 -- # waitforlisten 140636 /var/tmp/spdk.sock 00:04:37.252 03:18:14 -- common/autotest_common.sh@817 -- # '[' -z 140636 ']' 00:04:37.252 03:18:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.252 03:18:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:37.252 03:18:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.252 03:18:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:37.252 03:18:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.252 [2024-04-19 03:18:14.692157] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:37.252 [2024-04-19 03:18:14.692250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140636 ] 00:04:37.252 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.252 [2024-04-19 03:18:14.749325] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:37.513 [2024-04-19 03:18:14.862126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.513 [2024-04-19 03:18:14.862189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:37.513 [2024-04-19 03:18:14.862191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.771 03:18:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:37.771 03:18:15 -- common/autotest_common.sh@850 -- # return 0 00:04:37.771 03:18:15 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=140647 00:04:37.771 03:18:15 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 140647 /var/tmp/spdk2.sock 00:04:37.771 03:18:15 -- common/autotest_common.sh@638 -- # local es=0 00:04:37.771 03:18:15 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 140647 /var/tmp/spdk2.sock 00:04:37.771 03:18:15 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:37.771 03:18:15 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:37.771 03:18:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:37.771 03:18:15 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:37.771 03:18:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:37.772 03:18:15 -- common/autotest_common.sh@641 -- # waitforlisten 140647 /var/tmp/spdk2.sock 00:04:37.772 03:18:15 -- common/autotest_common.sh@817 -- # '[' -z 140647 ']' 00:04:37.772 03:18:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:37.772 03:18:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:37.772 03:18:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:04:37.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:37.772 03:18:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:37.772 03:18:15 -- common/autotest_common.sh@10 -- # set +x 00:04:37.772 [2024-04-19 03:18:15.166230] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:37.772 [2024-04-19 03:18:15.166326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140647 ] 00:04:37.772 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.772 [2024-04-19 03:18:15.251680] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 140636 has claimed it. 00:04:37.772 [2024-04-19 03:18:15.251752] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:38.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (140647) - No such process 00:04:38.340 ERROR: process (pid: 140647) is no longer running 00:04:38.340 03:18:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:38.340 03:18:15 -- common/autotest_common.sh@850 -- # return 1 00:04:38.340 03:18:15 -- common/autotest_common.sh@641 -- # es=1 00:04:38.340 03:18:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:38.340 03:18:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:38.340 03:18:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:38.340 03:18:15 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:38.340 03:18:15 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:38.340 03:18:15 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:38.340 03:18:15 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:38.340 03:18:15 -- event/cpu_locks.sh@141 -- # killprocess 140636 00:04:38.340 03:18:15 -- common/autotest_common.sh@936 -- # '[' -z 140636 ']' 00:04:38.340 03:18:15 -- common/autotest_common.sh@940 -- # kill -0 140636 00:04:38.340 03:18:15 -- common/autotest_common.sh@941 -- # uname 00:04:38.340 03:18:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:38.340 03:18:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140636 00:04:38.340 03:18:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:38.340 03:18:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:38.340 03:18:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140636' 00:04:38.340 killing process with pid 140636 00:04:38.340 03:18:15 -- common/autotest_common.sh@955 -- # kill 140636 00:04:38.340 03:18:15 -- common/autotest_common.sh@960 -- # wait 140636 00:04:38.908 00:04:38.908 real 0m1.709s 00:04:38.908 user 0m4.520s 00:04:38.908 sys 0m0.454s 00:04:38.908 03:18:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:38.908 03:18:16 -- common/autotest_common.sh@10 -- # set +x 00:04:38.908 ************************************ 00:04:38.908 END TEST locking_overlapped_coremask 00:04:38.908 ************************************ 00:04:38.908 03:18:16 -- event/cpu_locks.sh@172 -- # run_test 
locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:38.908 03:18:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.908 03:18:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.908 03:18:16 -- common/autotest_common.sh@10 -- # set +x 00:04:39.168 ************************************ 00:04:39.168 START TEST locking_overlapped_coremask_via_rpc 00:04:39.168 ************************************ 00:04:39.168 03:18:16 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:04:39.168 03:18:16 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=140893 00:04:39.168 03:18:16 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:39.168 03:18:16 -- event/cpu_locks.sh@149 -- # waitforlisten 140893 /var/tmp/spdk.sock 00:04:39.168 03:18:16 -- common/autotest_common.sh@817 -- # '[' -z 140893 ']' 00:04:39.168 03:18:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.168 03:18:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:39.168 03:18:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.168 03:18:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:39.168 03:18:16 -- common/autotest_common.sh@10 -- # set +x 00:04:39.168 [2024-04-19 03:18:16.533210] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:39.168 [2024-04-19 03:18:16.533287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140893 ] 00:04:39.168 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.168 [2024-04-19 03:18:16.590289] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:39.168 [2024-04-19 03:18:16.590326] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:39.168 [2024-04-19 03:18:16.701789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.168 [2024-04-19 03:18:16.701851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:39.168 [2024-04-19 03:18:16.701854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.426 03:18:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:39.426 03:18:16 -- common/autotest_common.sh@850 -- # return 0 00:04:39.426 03:18:16 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=140950 00:04:39.426 03:18:16 -- event/cpu_locks.sh@153 -- # waitforlisten 140950 /var/tmp/spdk2.sock 00:04:39.426 03:18:16 -- common/autotest_common.sh@817 -- # '[' -z 140950 ']' 00:04:39.427 03:18:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:39.427 03:18:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:39.427 03:18:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:39.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:39.427 03:18:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:39.427 03:18:16 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:39.427 03:18:16 -- common/autotest_common.sh@10 -- # set +x 00:04:39.686 [2024-04-19 03:18:16.998905] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:39.686 [2024-04-19 03:18:16.999004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140950 ] 00:04:39.686 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.686 [2024-04-19 03:18:17.089270] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:39.686 [2024-04-19 03:18:17.089306] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:39.946 [2024-04-19 03:18:17.311644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:39.946 [2024-04-19 03:18:17.311704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:39.946 [2024-04-19 03:18:17.311706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.514 03:18:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:40.514 03:18:17 -- common/autotest_common.sh@850 -- # return 0 00:04:40.514 03:18:17 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:40.514 03:18:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:40.514 03:18:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.514 03:18:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:40.514 03:18:17 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:40.514 03:18:17 -- common/autotest_common.sh@638 -- # local es=0 00:04:40.514 03:18:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:40.514 03:18:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:40.514 03:18:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:40.514 03:18:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:40.514 03:18:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:40.514 03:18:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:40.514 03:18:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:40.514 03:18:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.514 [2024-04-19 03:18:17.916473] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 140893 has claimed it. 
00:04:40.514 request: 00:04:40.514 { 00:04:40.514 "method": "framework_enable_cpumask_locks", 00:04:40.514 "req_id": 1 00:04:40.514 } 00:04:40.514 Got JSON-RPC error response 00:04:40.514 response: 00:04:40.514 { 00:04:40.514 "code": -32603, 00:04:40.514 "message": "Failed to claim CPU core: 2" 00:04:40.514 } 00:04:40.514 03:18:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:40.514 03:18:17 -- common/autotest_common.sh@641 -- # es=1 00:04:40.514 03:18:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:40.514 03:18:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:40.514 03:18:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:40.514 03:18:17 -- event/cpu_locks.sh@158 -- # waitforlisten 140893 /var/tmp/spdk.sock 00:04:40.514 03:18:17 -- common/autotest_common.sh@817 -- # '[' -z 140893 ']' 00:04:40.514 03:18:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.514 03:18:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:40.514 03:18:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.514 03:18:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:40.514 03:18:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.772 03:18:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:40.772 03:18:18 -- common/autotest_common.sh@850 -- # return 0 00:04:40.772 03:18:18 -- event/cpu_locks.sh@159 -- # waitforlisten 140950 /var/tmp/spdk2.sock 00:04:40.772 03:18:18 -- common/autotest_common.sh@817 -- # '[' -z 140950 ']' 00:04:40.772 03:18:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.772 03:18:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:40.772 03:18:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:40.772 03:18:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:40.772 03:18:18 -- common/autotest_common.sh@10 -- # set +x 00:04:41.032 03:18:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:41.032 03:18:18 -- common/autotest_common.sh@850 -- # return 0 00:04:41.032 03:18:18 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:41.032 03:18:18 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:41.032 03:18:18 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:41.032 03:18:18 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:41.032 00:04:41.032 real 0m1.926s 00:04:41.032 user 0m0.980s 00:04:41.032 sys 0m0.172s 00:04:41.032 03:18:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:41.032 03:18:18 -- common/autotest_common.sh@10 -- # set +x 00:04:41.032 ************************************ 00:04:41.032 END TEST locking_overlapped_coremask_via_rpc 00:04:41.032 ************************************ 00:04:41.032 03:18:18 -- event/cpu_locks.sh@174 -- # cleanup 00:04:41.032 03:18:18 -- event/cpu_locks.sh@15 -- # [[ -z 140893 ]] 00:04:41.032 03:18:18 -- event/cpu_locks.sh@15 -- # killprocess 140893 00:04:41.032 03:18:18 -- common/autotest_common.sh@936 -- # '[' -z 140893 ']' 00:04:41.032 03:18:18 -- common/autotest_common.sh@940 -- # kill -0 140893 00:04:41.032 03:18:18 -- common/autotest_common.sh@941 -- # uname 00:04:41.032 03:18:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:41.032 03:18:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140893 00:04:41.032 03:18:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:41.032 03:18:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:41.032 03:18:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140893' 00:04:41.032 killing process with pid 140893 00:04:41.032 03:18:18 -- common/autotest_common.sh@955 -- # kill 140893 00:04:41.032 03:18:18 -- common/autotest_common.sh@960 -- # wait 140893 00:04:41.602 03:18:18 -- event/cpu_locks.sh@16 -- # [[ -z 140950 ]] 00:04:41.602 03:18:18 -- event/cpu_locks.sh@16 -- # killprocess 140950 00:04:41.602 03:18:18 -- common/autotest_common.sh@936 -- # '[' -z 140950 ']' 00:04:41.602 03:18:18 -- common/autotest_common.sh@940 -- # kill -0 140950 00:04:41.602 03:18:18 -- common/autotest_common.sh@941 -- # uname 00:04:41.602 03:18:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:41.602 03:18:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140950 00:04:41.602 03:18:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:41.602 03:18:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:41.602 03:18:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140950' 00:04:41.602 killing process with pid 140950 00:04:41.602 03:18:18 -- common/autotest_common.sh@955 -- # kill 140950 00:04:41.602 03:18:18 -- common/autotest_common.sh@960 -- # wait 140950 00:04:41.862 03:18:19 -- event/cpu_locks.sh@18 -- # rm -f 00:04:41.862 03:18:19 -- event/cpu_locks.sh@1 -- # cleanup 00:04:41.862 03:18:19 -- event/cpu_locks.sh@15 -- # [[ -z 140893 ]] 00:04:41.862 03:18:19 -- event/cpu_locks.sh@15 -- # killprocess 140893 00:04:41.862 
03:18:19 -- common/autotest_common.sh@936 -- # '[' -z 140893 ']' 00:04:41.862 03:18:19 -- common/autotest_common.sh@940 -- # kill -0 140893 00:04:41.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (140893) - No such process 00:04:41.862 03:18:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 140893 is not found' 00:04:41.862 Process with pid 140893 is not found 00:04:41.862 03:18:19 -- event/cpu_locks.sh@16 -- # [[ -z 140950 ]] 00:04:41.862 03:18:19 -- event/cpu_locks.sh@16 -- # killprocess 140950 00:04:41.862 03:18:19 -- common/autotest_common.sh@936 -- # '[' -z 140950 ']' 00:04:41.862 03:18:19 -- common/autotest_common.sh@940 -- # kill -0 140950 00:04:41.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (140950) - No such process 00:04:41.862 03:18:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 140950 is not found' 00:04:41.862 Process with pid 140950 is not found 00:04:41.862 03:18:19 -- event/cpu_locks.sh@18 -- # rm -f 00:04:41.862 00:04:41.862 real 0m17.897s 00:04:41.862 user 0m29.399s 00:04:41.862 sys 0m5.752s 00:04:41.862 03:18:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:41.862 03:18:19 -- common/autotest_common.sh@10 -- # set +x 00:04:41.862 ************************************ 00:04:41.862 END TEST cpu_locks 00:04:41.862 ************************************ 00:04:41.862 00:04:41.862 real 0m43.790s 00:04:41.862 user 1m20.008s 00:04:41.862 sys 0m10.054s 00:04:41.862 03:18:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:41.862 03:18:19 -- common/autotest_common.sh@10 -- # set +x 00:04:41.862 ************************************ 00:04:41.862 END TEST event 00:04:41.862 ************************************ 00:04:42.120 03:18:19 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:42.120 03:18:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.120 03:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.120 03:18:19 -- common/autotest_common.sh@10 -- # set +x 00:04:42.120 ************************************ 00:04:42.120 START TEST thread 00:04:42.120 ************************************ 00:04:42.120 03:18:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:42.120 * Looking for test storage... 00:04:42.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:42.120 03:18:19 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:42.120 03:18:19 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:42.120 03:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.120 03:18:19 -- common/autotest_common.sh@10 -- # set +x 00:04:42.120 ************************************ 00:04:42.120 START TEST thread_poller_perf 00:04:42.120 ************************************ 00:04:42.120 03:18:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:42.379 [2024-04-19 03:18:19.684866] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:04:42.379 [2024-04-19 03:18:19.684931] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141332 ] 00:04:42.379 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.379 [2024-04-19 03:18:19.744953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.379 [2024-04-19 03:18:19.860878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.379 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:43.757 ====================================== 00:04:43.757 busy:2715920387 (cyc) 00:04:43.757 total_run_count: 292000 00:04:43.757 tsc_hz: 2700000000 (cyc) 00:04:43.757 ====================================== 00:04:43.757 poller_cost: 9301 (cyc), 3444 (nsec) 00:04:43.757 00:04:43.757 real 0m1.325s 00:04:43.757 user 0m1.228s 00:04:43.757 sys 0m0.091s 00:04:43.757 03:18:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.757 03:18:20 -- common/autotest_common.sh@10 -- # set +x 00:04:43.757 ************************************ 00:04:43.757 END TEST thread_poller_perf 00:04:43.757 ************************************ 00:04:43.757 03:18:21 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:43.757 03:18:21 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:43.757 03:18:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.757 03:18:21 -- common/autotest_common.sh@10 -- # set +x 00:04:43.757 ************************************ 00:04:43.757 START TEST thread_poller_perf 00:04:43.757 ************************************ 00:04:43.757 03:18:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:43.757 [2024-04-19 03:18:21.130238] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:43.757 [2024-04-19 03:18:21.130310] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141615 ] 00:04:43.757 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.757 [2024-04-19 03:18:21.195501] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.757 [2024-04-19 03:18:21.314301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.757 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:04:45.135 ====================================== 00:04:45.135 busy:2703038420 (cyc) 00:04:45.135 total_run_count: 3856000 00:04:45.135 tsc_hz: 2700000000 (cyc) 00:04:45.135 ====================================== 00:04:45.135 poller_cost: 700 (cyc), 259 (nsec) 00:04:45.135 00:04:45.135 real 0m1.323s 00:04:45.135 user 0m1.239s 00:04:45.136 sys 0m0.077s 00:04:45.136 03:18:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:45.136 03:18:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.136 ************************************ 00:04:45.136 END TEST thread_poller_perf 00:04:45.136 ************************************ 00:04:45.136 03:18:22 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:45.136 00:04:45.136 real 0m2.946s 00:04:45.136 user 0m2.563s 00:04:45.136 sys 0m0.358s 00:04:45.136 03:18:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:45.136 03:18:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.136 ************************************ 00:04:45.136 END TEST thread 00:04:45.136 ************************************ 00:04:45.136 03:18:22 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:45.136 03:18:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.136 03:18:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.136 03:18:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.136 ************************************ 00:04:45.136 START TEST accel 00:04:45.136 ************************************ 00:04:45.136 03:18:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:45.136 * Looking for test storage... 00:04:45.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:04:45.136 03:18:22 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:45.136 03:18:22 -- accel/accel.sh@82 -- # get_expected_opcs 00:04:45.136 03:18:22 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:45.136 03:18:22 -- accel/accel.sh@62 -- # spdk_tgt_pid=141821 00:04:45.136 03:18:22 -- accel/accel.sh@63 -- # waitforlisten 141821 00:04:45.136 03:18:22 -- common/autotest_common.sh@817 -- # '[' -z 141821 ']' 00:04:45.136 03:18:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.136 03:18:22 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:45.136 03:18:22 -- accel/accel.sh@61 -- # build_accel_config 00:04:45.136 03:18:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:45.136 03:18:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:45.136 03:18:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.136 03:18:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:45.136 03:18:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:45.136 03:18:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:45.136 03:18:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.136 03:18:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:45.136 03:18:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:45.136 03:18:22 -- accel/accel.sh@40 -- # local IFS=, 00:04:45.136 03:18:22 -- accel/accel.sh@41 -- # jq -r . 
00:04:45.136 [2024-04-19 03:18:22.664918] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:45.136 [2024-04-19 03:18:22.664992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141821 ] 00:04:45.136 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.395 [2024-04-19 03:18:22.725251] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.395 [2024-04-19 03:18:22.841158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.333 03:18:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:46.333 03:18:23 -- common/autotest_common.sh@850 -- # return 0 00:04:46.333 03:18:23 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:46.333 03:18:23 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:46.333 03:18:23 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:46.333 03:18:23 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:46.333 03:18:23 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:46.333 03:18:23 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:46.333 03:18:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:46.333 03:18:23 -- common/autotest_common.sh@10 -- # set +x 00:04:46.333 03:18:23 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:04:46.333 03:18:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # IFS== 00:04:46.333 03:18:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:46.333 03:18:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:46.333 03:18:23 -- accel/accel.sh@75 -- # killprocess 141821 00:04:46.333 03:18:23 -- common/autotest_common.sh@936 -- # '[' -z 141821 ']' 00:04:46.333 03:18:23 -- common/autotest_common.sh@940 -- # kill -0 141821 00:04:46.333 03:18:23 -- common/autotest_common.sh@941 -- # uname 00:04:46.333 03:18:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:46.333 03:18:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141821 00:04:46.333 03:18:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:46.333 03:18:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:46.333 03:18:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141821' 00:04:46.333 killing process with pid 141821 00:04:46.333 03:18:23 -- common/autotest_common.sh@955 -- # kill 141821 00:04:46.333 03:18:23 -- common/autotest_common.sh@960 -- # wait 141821 00:04:46.592 03:18:24 -- accel/accel.sh@76 -- # trap - ERR 00:04:46.592 03:18:24 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:46.592 03:18:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:04:46.592 03:18:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.592 03:18:24 -- common/autotest_common.sh@10 -- # set +x 00:04:46.851 03:18:24 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:04:46.851 03:18:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:46.851 03:18:24 -- accel/accel.sh@12 -- # build_accel_config 
00:04:46.851 03:18:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:46.851 03:18:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:46.851 03:18:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:46.851 03:18:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:46.851 03:18:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:46.851 03:18:24 -- accel/accel.sh@40 -- # local IFS=, 00:04:46.851 03:18:24 -- accel/accel.sh@41 -- # jq -r . 00:04:46.851 03:18:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:46.851 03:18:24 -- common/autotest_common.sh@10 -- # set +x 00:04:46.851 03:18:24 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:46.851 03:18:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:46.851 03:18:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.851 03:18:24 -- common/autotest_common.sh@10 -- # set +x 00:04:46.851 ************************************ 00:04:46.851 START TEST accel_missing_filename 00:04:46.851 ************************************ 00:04:46.851 03:18:24 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:04:46.852 03:18:24 -- common/autotest_common.sh@638 -- # local es=0 00:04:46.852 03:18:24 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:46.852 03:18:24 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:46.852 03:18:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:46.852 03:18:24 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:46.852 03:18:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:46.852 03:18:24 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:04:46.852 03:18:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:46.852 03:18:24 -- accel/accel.sh@12 -- # build_accel_config 00:04:46.852 03:18:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:46.852 03:18:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:46.852 03:18:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:46.852 03:18:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:46.852 03:18:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:46.852 03:18:24 -- accel/accel.sh@40 -- # local IFS=, 00:04:46.852 03:18:24 -- accel/accel.sh@41 -- # jq -r . 00:04:46.852 [2024-04-19 03:18:24.383949] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:46.852 [2024-04-19 03:18:24.384017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142040 ] 00:04:47.111 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.111 [2024-04-19 03:18:24.446350] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.111 [2024-04-19 03:18:24.564062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.112 [2024-04-19 03:18:24.623709] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:47.372 [2024-04-19 03:18:24.711956] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:04:47.372 A filename is required. 
00:04:47.372 03:18:24 -- common/autotest_common.sh@641 -- # es=234 00:04:47.372 03:18:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:47.372 03:18:24 -- common/autotest_common.sh@650 -- # es=106 00:04:47.372 03:18:24 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:47.372 03:18:24 -- common/autotest_common.sh@658 -- # es=1 00:04:47.372 03:18:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:47.372 00:04:47.372 real 0m0.464s 00:04:47.372 user 0m0.354s 00:04:47.372 sys 0m0.142s 00:04:47.372 03:18:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.372 03:18:24 -- common/autotest_common.sh@10 -- # set +x 00:04:47.372 ************************************ 00:04:47.372 END TEST accel_missing_filename 00:04:47.372 ************************************ 00:04:47.372 03:18:24 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:47.372 03:18:24 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:47.372 03:18:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.372 03:18:24 -- common/autotest_common.sh@10 -- # set +x 00:04:47.633 ************************************ 00:04:47.633 START TEST accel_compress_verify 00:04:47.633 ************************************ 00:04:47.633 03:18:24 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:47.633 03:18:24 -- common/autotest_common.sh@638 -- # local es=0 00:04:47.633 03:18:24 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:47.633 03:18:24 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:47.633 03:18:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:47.633 03:18:24 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:47.633 03:18:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:47.633 03:18:24 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:47.633 03:18:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:47.633 03:18:24 -- accel/accel.sh@12 -- # build_accel_config 00:04:47.633 03:18:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:47.633 03:18:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:47.633 03:18:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:47.633 03:18:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:47.633 03:18:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:47.633 03:18:24 -- accel/accel.sh@40 -- # local IFS=, 00:04:47.633 03:18:24 -- accel/accel.sh@41 -- # jq -r . 00:04:47.633 [2024-04-19 03:18:24.961541] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:04:47.633 [2024-04-19 03:18:24.961605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142166 ] 00:04:47.633 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.633 [2024-04-19 03:18:25.024169] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.633 [2024-04-19 03:18:25.144722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.893 [2024-04-19 03:18:25.206779] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:47.893 [2024-04-19 03:18:25.295707] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:04:47.893 00:04:47.893 Compression does not support the verify option, aborting. 00:04:47.893 03:18:25 -- common/autotest_common.sh@641 -- # es=161 00:04:47.893 03:18:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:47.893 03:18:25 -- common/autotest_common.sh@650 -- # es=33 00:04:47.893 03:18:25 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:47.893 03:18:25 -- common/autotest_common.sh@658 -- # es=1 00:04:47.893 03:18:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:47.893 00:04:47.893 real 0m0.476s 00:04:47.893 user 0m0.367s 00:04:47.893 sys 0m0.143s 00:04:47.893 03:18:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.893 03:18:25 -- common/autotest_common.sh@10 -- # set +x 00:04:47.893 ************************************ 00:04:47.893 END TEST accel_compress_verify 00:04:47.893 ************************************ 00:04:47.893 03:18:25 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:47.893 03:18:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:47.893 03:18:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.893 03:18:25 -- common/autotest_common.sh@10 -- # set +x 00:04:48.152 ************************************ 00:04:48.152 START TEST accel_wrong_workload 00:04:48.152 ************************************ 00:04:48.152 03:18:25 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:04:48.152 03:18:25 -- common/autotest_common.sh@638 -- # local es=0 00:04:48.152 03:18:25 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:48.152 03:18:25 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:48.152 03:18:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:48.152 03:18:25 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:48.152 03:18:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:48.152 03:18:25 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:04:48.152 03:18:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:48.152 03:18:25 -- accel/accel.sh@12 -- # build_accel_config 00:04:48.152 03:18:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:48.152 03:18:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:48.152 03:18:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:48.152 03:18:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:48.152 03:18:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:48.152 03:18:25 -- accel/accel.sh@40 -- # local IFS=, 00:04:48.152 03:18:25 -- accel/accel.sh@41 -- # jq -r . 
00:04:48.152 Unsupported workload type: foobar 00:04:48.152 [2024-04-19 03:18:25.556531] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:48.152 accel_perf options: 00:04:48.152 [-h help message] 00:04:48.152 [-q queue depth per core] 00:04:48.152 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:48.152 [-T number of threads per core 00:04:48.152 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:48.152 [-t time in seconds] 00:04:48.152 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:48.152 [ dif_verify, , dif_generate, dif_generate_copy 00:04:48.152 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:48.152 [-l for compress/decompress workloads, name of uncompressed input file 00:04:48.152 [-S for crc32c workload, use this seed value (default 0) 00:04:48.152 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:48.152 [-f for fill workload, use this BYTE value (default 255) 00:04:48.152 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:48.152 [-y verify result if this switch is on] 00:04:48.152 [-a tasks to allocate per core (default: same value as -q)] 00:04:48.153 Can be used to spread operations across a wider range of memory. 00:04:48.153 03:18:25 -- common/autotest_common.sh@641 -- # es=1 00:04:48.153 03:18:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:48.153 03:18:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:48.153 03:18:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:48.153 00:04:48.153 real 0m0.022s 00:04:48.153 user 0m0.012s 00:04:48.153 sys 0m0.010s 00:04:48.153 03:18:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:48.153 03:18:25 -- common/autotest_common.sh@10 -- # set +x 00:04:48.153 ************************************ 00:04:48.153 END TEST accel_wrong_workload 00:04:48.153 ************************************ 00:04:48.153 Error: writing output failed: Broken pipe 00:04:48.153 03:18:25 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:48.153 03:18:25 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:48.153 03:18:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.153 03:18:25 -- common/autotest_common.sh@10 -- # set +x 00:04:48.153 ************************************ 00:04:48.153 START TEST accel_negative_buffers 00:04:48.153 ************************************ 00:04:48.153 03:18:25 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:48.153 03:18:25 -- common/autotest_common.sh@638 -- # local es=0 00:04:48.153 03:18:25 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:48.153 03:18:25 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:48.153 03:18:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:48.153 03:18:25 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:48.153 03:18:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:48.153 03:18:25 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:04:48.153 03:18:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:04:48.153 03:18:25 -- accel/accel.sh@12 -- # build_accel_config 00:04:48.153 03:18:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:48.153 03:18:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:48.153 03:18:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:48.153 03:18:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:48.153 03:18:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:48.153 03:18:25 -- accel/accel.sh@40 -- # local IFS=, 00:04:48.153 03:18:25 -- accel/accel.sh@41 -- # jq -r . 00:04:48.153 -x option must be non-negative. 00:04:48.153 [2024-04-19 03:18:25.699335] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:48.153 accel_perf options: 00:04:48.153 [-h help message] 00:04:48.153 [-q queue depth per core] 00:04:48.153 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:48.153 [-T number of threads per core 00:04:48.153 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:48.153 [-t time in seconds] 00:04:48.153 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:48.153 [ dif_verify, , dif_generate, dif_generate_copy 00:04:48.153 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:48.153 [-l for compress/decompress workloads, name of uncompressed input file 00:04:48.153 [-S for crc32c workload, use this seed value (default 0) 00:04:48.153 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:48.153 [-f for fill workload, use this BYTE value (default 255) 00:04:48.153 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:48.153 [-y verify result if this switch is on] 00:04:48.153 [-a tasks to allocate per core (default: same value as -q)] 00:04:48.153 Can be used to spread operations across a wider range of memory. 
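Annotation: printed twice above, the option summary documents both constraints this pair of negative tests trips: '-w foobar' is not in the workload list, and '-x' (the xor source-buffer count) must be non-negative with a documented minimum of 2. For contrast, well-formed versions of the rejected runs would look like this (illustrative invocations, not commands taken from this log):

    # xor with the documented minimum of two source buffers
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2

    # compress without -y, which the earlier abort showed is unsupported;
    # per the usage text, '-o 0' means "use the input file size"
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -o 0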
00:04:48.153 03:18:25 -- common/autotest_common.sh@641 -- # es=1 00:04:48.153 03:18:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:48.153 03:18:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:48.153 03:18:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:48.153 00:04:48.153 real 0m0.023s 00:04:48.153 user 0m0.013s 00:04:48.153 sys 0m0.010s 00:04:48.153 03:18:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:48.153 03:18:25 -- common/autotest_common.sh@10 -- # set +x 00:04:48.153 ************************************ 00:04:48.153 END TEST accel_negative_buffers 00:04:48.153 ************************************ 00:04:48.411 Error: writing output failed: Broken pipe 00:04:48.411 03:18:25 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:48.411 03:18:25 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:48.411 03:18:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.411 03:18:25 -- common/autotest_common.sh@10 -- # set +x 00:04:48.411 ************************************ 00:04:48.411 START TEST accel_crc32c 00:04:48.411 ************************************ 00:04:48.411 03:18:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:48.411 03:18:25 -- accel/accel.sh@16 -- # local accel_opc 00:04:48.411 03:18:25 -- accel/accel.sh@17 -- # local accel_module 00:04:48.411 03:18:25 -- accel/accel.sh@19 -- # IFS=: 00:04:48.411 03:18:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:48.411 03:18:25 -- accel/accel.sh@19 -- # read -r var val 00:04:48.411 03:18:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:48.411 03:18:25 -- accel/accel.sh@12 -- # build_accel_config 00:04:48.411 03:18:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:48.411 03:18:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:48.411 03:18:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:48.411 03:18:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:48.411 03:18:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:48.411 03:18:25 -- accel/accel.sh@40 -- # local IFS=, 00:04:48.411 03:18:25 -- accel/accel.sh@41 -- # jq -r . 00:04:48.411 [2024-04-19 03:18:25.828943] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
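Annotation: the wall of 'val=...' assignments that follows this initialization line is the harness replaying the expected run parameters: accel.sh emits key:value pairs and reads them back one at a time. Judging only from the 'IFS=:', 'read -r var val', and 'case "$var"' fragments in the trace, the reader loop has roughly this shape (the key names in the case arms are assumptions):

    while IFS=: read -r var val; do          # matches the 'IFS=:' / 'read -r var val' pairs in the trace
        case "$var" in
            opc*)    accel_opc=$val ;;       # the trace later shows '# accel_opc=crc32c'
            module*) accel_module=$val ;;    # the trace later shows '# accel_module=software'
            *)       ;;                      # queue depth, buffer sizes, durations, etc. elided
        esac
    done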
00:04:48.412 [2024-04-19 03:18:25.829007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142369 ] 00:04:48.412 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.412 [2024-04-19 03:18:25.891394] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.671 [2024-04-19 03:18:26.009052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.671 03:18:26 -- accel/accel.sh@20 -- # val= 00:04:48.671 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.671 03:18:26 -- accel/accel.sh@20 -- # val= 00:04:48.671 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.671 03:18:26 -- accel/accel.sh@20 -- # val=0x1 00:04:48.671 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.671 03:18:26 -- accel/accel.sh@20 -- # val= 00:04:48.671 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.671 03:18:26 -- accel/accel.sh@20 -- # val= 00:04:48.671 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.671 03:18:26 -- accel/accel.sh@20 -- # val=crc32c 00:04:48.671 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.671 03:18:26 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.671 03:18:26 -- accel/accel.sh@20 -- # val=32 00:04:48.671 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.671 03:18:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:48.671 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.671 03:18:26 -- accel/accel.sh@20 -- # val= 00:04:48.671 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.671 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.671 03:18:26 -- accel/accel.sh@20 -- # val=software 00:04:48.672 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.672 03:18:26 -- accel/accel.sh@22 -- # accel_module=software 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.672 03:18:26 -- accel/accel.sh@20 -- # val=32 00:04:48.672 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.672 03:18:26 -- accel/accel.sh@20 -- # val=32 00:04:48.672 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.672 03:18:26 -- 
accel/accel.sh@20 -- # val=1 00:04:48.672 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.672 03:18:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:48.672 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.672 03:18:26 -- accel/accel.sh@20 -- # val=Yes 00:04:48.672 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.672 03:18:26 -- accel/accel.sh@20 -- # val= 00:04:48.672 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:48.672 03:18:26 -- accel/accel.sh@20 -- # val= 00:04:48.672 03:18:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # IFS=: 00:04:48.672 03:18:26 -- accel/accel.sh@19 -- # read -r var val 00:04:50.051 03:18:27 -- accel/accel.sh@20 -- # val= 00:04:50.051 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.051 03:18:27 -- accel/accel.sh@20 -- # val= 00:04:50.051 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.051 03:18:27 -- accel/accel.sh@20 -- # val= 00:04:50.051 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.051 03:18:27 -- accel/accel.sh@20 -- # val= 00:04:50.051 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.051 03:18:27 -- accel/accel.sh@20 -- # val= 00:04:50.051 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.051 03:18:27 -- accel/accel.sh@20 -- # val= 00:04:50.051 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.051 03:18:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:50.051 03:18:27 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:50.051 03:18:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:50.051 00:04:50.051 real 0m1.461s 00:04:50.051 user 0m1.317s 00:04:50.051 sys 0m0.146s 00:04:50.051 03:18:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.051 03:18:27 -- common/autotest_common.sh@10 -- # set +x 00:04:50.051 ************************************ 00:04:50.051 END TEST accel_crc32c 00:04:50.051 ************************************ 00:04:50.051 03:18:27 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:50.051 03:18:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:50.051 03:18:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.051 03:18:27 -- common/autotest_common.sh@10 -- # set +x 00:04:50.051 ************************************ 00:04:50.051 START TEST 
accel_crc32c_C2 00:04:50.051 ************************************ 00:04:50.051 03:18:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:50.051 03:18:27 -- accel/accel.sh@16 -- # local accel_opc 00:04:50.051 03:18:27 -- accel/accel.sh@17 -- # local accel_module 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.051 03:18:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:50.051 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.051 03:18:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:50.051 03:18:27 -- accel/accel.sh@12 -- # build_accel_config 00:04:50.051 03:18:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.051 03:18:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.051 03:18:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.051 03:18:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.051 03:18:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.051 03:18:27 -- accel/accel.sh@40 -- # local IFS=, 00:04:50.051 03:18:27 -- accel/accel.sh@41 -- # jq -r . 00:04:50.051 [2024-04-19 03:18:27.407543] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:50.051 [2024-04-19 03:18:27.407601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142540 ] 00:04:50.051 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.051 [2024-04-19 03:18:27.468524] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.051 [2024-04-19 03:18:27.587736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.310 03:18:27 -- accel/accel.sh@20 -- # val= 00:04:50.310 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.310 03:18:27 -- accel/accel.sh@20 -- # val= 00:04:50.310 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.310 03:18:27 -- accel/accel.sh@20 -- # val=0x1 00:04:50.310 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.310 03:18:27 -- accel/accel.sh@20 -- # val= 00:04:50.310 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.310 03:18:27 -- accel/accel.sh@20 -- # val= 00:04:50.310 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.310 03:18:27 -- accel/accel.sh@20 -- # val=crc32c 00:04:50.310 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.310 03:18:27 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.310 03:18:27 -- accel/accel.sh@20 -- # val=0 00:04:50.310 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.310 03:18:27 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:04:50.310 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.310 03:18:27 -- accel/accel.sh@20 -- # val= 00:04:50.310 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.310 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.311 03:18:27 -- accel/accel.sh@20 -- # val=software 00:04:50.311 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.311 03:18:27 -- accel/accel.sh@22 -- # accel_module=software 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.311 03:18:27 -- accel/accel.sh@20 -- # val=32 00:04:50.311 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.311 03:18:27 -- accel/accel.sh@20 -- # val=32 00:04:50.311 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.311 03:18:27 -- accel/accel.sh@20 -- # val=1 00:04:50.311 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.311 03:18:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:50.311 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.311 03:18:27 -- accel/accel.sh@20 -- # val=Yes 00:04:50.311 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.311 03:18:27 -- accel/accel.sh@20 -- # val= 00:04:50.311 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:50.311 03:18:27 -- accel/accel.sh@20 -- # val= 00:04:50.311 03:18:27 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # IFS=: 00:04:50.311 03:18:27 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:28 -- accel/accel.sh@20 -- # val= 00:04:51.720 03:18:28 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:28 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:28 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:28 -- accel/accel.sh@20 -- # val= 00:04:51.720 03:18:28 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:28 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:28 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:28 -- accel/accel.sh@20 -- # val= 00:04:51.720 03:18:28 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:28 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:28 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:28 -- accel/accel.sh@20 -- # val= 00:04:51.720 03:18:28 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:28 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:28 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:28 -- accel/accel.sh@20 -- # val= 00:04:51.720 03:18:28 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:28 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:28 -- 
accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:28 -- accel/accel.sh@20 -- # val= 00:04:51.720 03:18:28 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:28 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:28 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:51.720 03:18:28 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:51.720 03:18:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:51.720 00:04:51.720 real 0m1.478s 00:04:51.720 user 0m1.335s 00:04:51.720 sys 0m0.144s 00:04:51.720 03:18:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.720 03:18:28 -- common/autotest_common.sh@10 -- # set +x 00:04:51.720 ************************************ 00:04:51.720 END TEST accel_crc32c_C2 00:04:51.720 ************************************ 00:04:51.720 03:18:28 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:51.720 03:18:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:51.720 03:18:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.720 03:18:28 -- common/autotest_common.sh@10 -- # set +x 00:04:51.720 ************************************ 00:04:51.720 START TEST accel_copy 00:04:51.720 ************************************ 00:04:51.720 03:18:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:04:51.720 03:18:28 -- accel/accel.sh@16 -- # local accel_opc 00:04:51.720 03:18:28 -- accel/accel.sh@17 -- # local accel_module 00:04:51.720 03:18:28 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:51.720 03:18:28 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:04:51.720 03:18:28 -- accel/accel.sh@12 -- # build_accel_config 00:04:51.720 03:18:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.720 03:18:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.720 03:18:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.720 03:18:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.720 03:18:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.720 03:18:28 -- accel/accel.sh@40 -- # local IFS=, 00:04:51.720 03:18:28 -- accel/accel.sh@41 -- # jq -r . 00:04:51.720 [2024-04-19 03:18:29.003184] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
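Annotation: the _C2 variants rerun a workload with '-C 2', which the usage dumps earlier define as the io vector size: the crc32c pass that just ended (END TEST accel_crc32c_C2 above) was exercised over 2-element iovecs rather than one flat 4 KiB buffer. An equivalent hand-run with a wider vector would be (illustrative, not taken from this log):

    accel_test -t 1 -w crc32c -y -C 4    # same workload and verify pass, 4-element io vector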
00:04:51.720 [2024-04-19 03:18:29.003252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142789 ] 00:04:51.720 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.720 [2024-04-19 03:18:29.066234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.720 [2024-04-19 03:18:29.185128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.720 03:18:29 -- accel/accel.sh@20 -- # val= 00:04:51.720 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:29 -- accel/accel.sh@20 -- # val= 00:04:51.720 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:29 -- accel/accel.sh@20 -- # val=0x1 00:04:51.720 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:29 -- accel/accel.sh@20 -- # val= 00:04:51.720 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:29 -- accel/accel.sh@20 -- # val= 00:04:51.720 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:29 -- accel/accel.sh@20 -- # val=copy 00:04:51.720 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:29 -- accel/accel.sh@23 -- # accel_opc=copy 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:51.720 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:29 -- accel/accel.sh@20 -- # val= 00:04:51.720 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:29 -- accel/accel.sh@20 -- # val=software 00:04:51.720 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:29 -- accel/accel.sh@22 -- # accel_module=software 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:29 -- accel/accel.sh@20 -- # val=32 00:04:51.720 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:29 -- accel/accel.sh@20 -- # val=32 00:04:51.720 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.720 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.720 03:18:29 -- accel/accel.sh@20 -- # val=1 00:04:51.721 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.721 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.721 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.721 03:18:29 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:04:51.721 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.721 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.721 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.721 03:18:29 -- accel/accel.sh@20 -- # val=Yes 00:04:51.721 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.721 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.721 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.721 03:18:29 -- accel/accel.sh@20 -- # val= 00:04:51.721 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.721 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.721 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:51.721 03:18:29 -- accel/accel.sh@20 -- # val= 00:04:51.721 03:18:29 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.721 03:18:29 -- accel/accel.sh@19 -- # IFS=: 00:04:51.721 03:18:29 -- accel/accel.sh@19 -- # read -r var val 00:04:53.103 03:18:30 -- accel/accel.sh@20 -- # val= 00:04:53.103 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.103 03:18:30 -- accel/accel.sh@20 -- # val= 00:04:53.103 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.103 03:18:30 -- accel/accel.sh@20 -- # val= 00:04:53.103 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.103 03:18:30 -- accel/accel.sh@20 -- # val= 00:04:53.103 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.103 03:18:30 -- accel/accel.sh@20 -- # val= 00:04:53.103 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.103 03:18:30 -- accel/accel.sh@20 -- # val= 00:04:53.103 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.103 03:18:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:53.103 03:18:30 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:04:53.103 03:18:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:53.103 00:04:53.103 real 0m1.465s 00:04:53.103 user 0m1.321s 00:04:53.103 sys 0m0.145s 00:04:53.103 03:18:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:53.103 03:18:30 -- common/autotest_common.sh@10 -- # set +x 00:04:53.103 ************************************ 00:04:53.103 END TEST accel_copy 00:04:53.103 ************************************ 00:04:53.103 03:18:30 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:53.103 03:18:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:04:53.103 03:18:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.103 03:18:30 -- common/autotest_common.sh@10 -- # set +x 00:04:53.103 ************************************ 00:04:53.103 START TEST accel_fill 00:04:53.103 ************************************ 00:04:53.103 03:18:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:53.103 03:18:30 -- accel/accel.sh@16 -- # local accel_opc 
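Annotation: the fill test launched just above is the only run in this section that overrides the generator knobs: per the usage summary, '-f 128' is the byte value written, '-q 64' the queue depth per core, and '-a 64' the tasks allocated per core. In the trace below the byte reappears as 'val=0x80', i.e. 128 rendered in hex; one plausible conversion (the actual accel.sh line is not visible in this log):

    printf -v fill_byte '0x%x' 128    # fill_byte=0x80, matching 'val=0x80' in the trace below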
00:04:53.103 03:18:30 -- accel/accel.sh@17 -- # local accel_module 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.103 03:18:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:53.103 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.103 03:18:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:53.103 03:18:30 -- accel/accel.sh@12 -- # build_accel_config 00:04:53.103 03:18:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:53.103 03:18:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:53.103 03:18:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:53.103 03:18:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:53.103 03:18:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:53.103 03:18:30 -- accel/accel.sh@40 -- # local IFS=, 00:04:53.104 03:18:30 -- accel/accel.sh@41 -- # jq -r . 00:04:53.104 [2024-04-19 03:18:30.576796] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:53.104 [2024-04-19 03:18:30.576865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142990 ] 00:04:53.104 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.104 [2024-04-19 03:18:30.641519] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.364 [2024-04-19 03:18:30.759645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.364 03:18:30 -- accel/accel.sh@20 -- # val= 00:04:53.364 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.364 03:18:30 -- accel/accel.sh@20 -- # val= 00:04:53.364 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.364 03:18:30 -- accel/accel.sh@20 -- # val=0x1 00:04:53.364 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.364 03:18:30 -- accel/accel.sh@20 -- # val= 00:04:53.364 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.364 03:18:30 -- accel/accel.sh@20 -- # val= 00:04:53.364 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.364 03:18:30 -- accel/accel.sh@20 -- # val=fill 00:04:53.364 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.364 03:18:30 -- accel/accel.sh@23 -- # accel_opc=fill 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.364 03:18:30 -- accel/accel.sh@20 -- # val=0x80 00:04:53.364 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.364 03:18:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:53.364 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- 
# read -r var val 00:04:53.364 03:18:30 -- accel/accel.sh@20 -- # val= 00:04:53.364 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.364 03:18:30 -- accel/accel.sh@20 -- # val=software 00:04:53.364 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.364 03:18:30 -- accel/accel.sh@22 -- # accel_module=software 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.364 03:18:30 -- accel/accel.sh@20 -- # val=64 00:04:53.364 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.364 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.365 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.365 03:18:30 -- accel/accel.sh@20 -- # val=64 00:04:53.365 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.365 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.365 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.365 03:18:30 -- accel/accel.sh@20 -- # val=1 00:04:53.365 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.365 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.365 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.365 03:18:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:53.365 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.365 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.365 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.365 03:18:30 -- accel/accel.sh@20 -- # val=Yes 00:04:53.365 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.365 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.365 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.365 03:18:30 -- accel/accel.sh@20 -- # val= 00:04:53.365 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.365 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.365 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:53.365 03:18:30 -- accel/accel.sh@20 -- # val= 00:04:53.365 03:18:30 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.365 03:18:30 -- accel/accel.sh@19 -- # IFS=: 00:04:53.365 03:18:30 -- accel/accel.sh@19 -- # read -r var val 00:04:54.746 03:18:32 -- accel/accel.sh@20 -- # val= 00:04:54.746 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:54.746 03:18:32 -- accel/accel.sh@20 -- # val= 00:04:54.746 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:54.746 03:18:32 -- accel/accel.sh@20 -- # val= 00:04:54.746 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:54.746 03:18:32 -- accel/accel.sh@20 -- # val= 00:04:54.746 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:54.746 03:18:32 -- accel/accel.sh@20 -- # val= 00:04:54.746 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:54.746 03:18:32 -- accel/accel.sh@20 -- # val= 00:04:54.746 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # 
IFS=: 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:54.746 03:18:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:54.746 03:18:32 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:04:54.746 03:18:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:54.746 00:04:54.746 real 0m1.475s 00:04:54.746 user 0m1.323s 00:04:54.746 sys 0m0.154s 00:04:54.746 03:18:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.746 03:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:54.746 ************************************ 00:04:54.746 END TEST accel_fill 00:04:54.746 ************************************ 00:04:54.746 03:18:32 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:04:54.746 03:18:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:54.746 03:18:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.746 03:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:54.746 ************************************ 00:04:54.746 START TEST accel_copy_crc32c 00:04:54.746 ************************************ 00:04:54.746 03:18:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:04:54.746 03:18:32 -- accel/accel.sh@16 -- # local accel_opc 00:04:54.746 03:18:32 -- accel/accel.sh@17 -- # local accel_module 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:54.746 03:18:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:54.746 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:54.746 03:18:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:04:54.746 03:18:32 -- accel/accel.sh@12 -- # build_accel_config 00:04:54.746 03:18:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:54.746 03:18:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:54.746 03:18:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.746 03:18:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.746 03:18:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:54.746 03:18:32 -- accel/accel.sh@40 -- # local IFS=, 00:04:54.746 03:18:32 -- accel/accel.sh@41 -- # jq -r . 00:04:54.746 [2024-04-19 03:18:32.175074] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
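Annotation: the START/END banners and the 'real/user/sys' triplets bracketing every test in this log come from one wrapper, run_test. Going only by what the banners and timing output reveal (argument checking and the xtrace toggling around it are elided, so this is a sketch, not the autotest_common.sh source):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # source of the 'real 0mX.XXXs / user / sys' lines above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

It is invoked exactly as logged, e.g. run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y.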
00:04:54.746 [2024-04-19 03:18:32.175136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143148 ] 00:04:54.746 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.746 [2024-04-19 03:18:32.236879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.005 [2024-04-19 03:18:32.354479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val= 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val= 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val=0x1 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val= 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val= 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val=0 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val= 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val=software 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@22 -- # accel_module=software 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val=32 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 
00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val=32 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val=1 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val=Yes 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val= 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:55.005 03:18:32 -- accel/accel.sh@20 -- # val= 00:04:55.005 03:18:32 -- accel/accel.sh@21 -- # case "$var" in 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # IFS=: 00:04:55.005 03:18:32 -- accel/accel.sh@19 -- # read -r var val 00:04:56.384 03:18:33 -- accel/accel.sh@20 -- # val= 00:04:56.384 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.384 03:18:33 -- accel/accel.sh@20 -- # val= 00:04:56.384 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.384 03:18:33 -- accel/accel.sh@20 -- # val= 00:04:56.384 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.384 03:18:33 -- accel/accel.sh@20 -- # val= 00:04:56.384 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.384 03:18:33 -- accel/accel.sh@20 -- # val= 00:04:56.384 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.384 03:18:33 -- accel/accel.sh@20 -- # val= 00:04:56.384 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.384 03:18:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:56.384 03:18:33 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:56.384 03:18:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:56.384 00:04:56.384 real 0m1.459s 00:04:56.384 user 0m1.323s 00:04:56.384 sys 0m0.137s 00:04:56.384 03:18:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:56.384 03:18:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.384 ************************************ 00:04:56.384 END TEST accel_copy_crc32c 00:04:56.384 ************************************ 00:04:56.384 03:18:33 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:04:56.384 
03:18:33 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:56.384 03:18:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.384 03:18:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.384 ************************************ 00:04:56.384 START TEST accel_copy_crc32c_C2 00:04:56.384 ************************************ 00:04:56.384 03:18:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:04:56.384 03:18:33 -- accel/accel.sh@16 -- # local accel_opc 00:04:56.384 03:18:33 -- accel/accel.sh@17 -- # local accel_module 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.384 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.384 03:18:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:56.384 03:18:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:04:56.384 03:18:33 -- accel/accel.sh@12 -- # build_accel_config 00:04:56.384 03:18:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.384 03:18:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.384 03:18:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.384 03:18:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.384 03:18:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.384 03:18:33 -- accel/accel.sh@40 -- # local IFS=, 00:04:56.384 03:18:33 -- accel/accel.sh@41 -- # jq -r . 00:04:56.384 [2024-04-19 03:18:33.756222] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:56.385 [2024-04-19 03:18:33.756288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143432 ] 00:04:56.385 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.385 [2024-04-19 03:18:33.817906] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.385 [2024-04-19 03:18:33.931770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val= 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val= 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val=0x1 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val= 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val= 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 
03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val=0 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val='8192 bytes' 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val= 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val=software 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@22 -- # accel_module=software 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val=32 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val=32 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.645 03:18:33 -- accel/accel.sh@20 -- # val=1 00:04:56.645 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.645 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.646 03:18:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:56.646 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.646 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.646 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.646 03:18:33 -- accel/accel.sh@20 -- # val=Yes 00:04:56.646 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.646 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.646 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.646 03:18:33 -- accel/accel.sh@20 -- # val= 00:04:56.646 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.646 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.646 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:56.646 03:18:33 -- accel/accel.sh@20 -- # val= 00:04:56.646 03:18:33 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.646 03:18:33 -- accel/accel.sh@19 -- # IFS=: 00:04:56.646 03:18:33 -- accel/accel.sh@19 -- # read -r var val 00:04:58.029 03:18:35 -- accel/accel.sh@20 -- # val= 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val= 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val= 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case 
"$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val= 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val= 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val= 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:58.030 03:18:35 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:58.030 03:18:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:58.030 00:04:58.030 real 0m1.467s 00:04:58.030 user 0m1.314s 00:04:58.030 sys 0m0.153s 00:04:58.030 03:18:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.030 03:18:35 -- common/autotest_common.sh@10 -- # set +x 00:04:58.030 ************************************ 00:04:58.030 END TEST accel_copy_crc32c_C2 00:04:58.030 ************************************ 00:04:58.030 03:18:35 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:04:58.030 03:18:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:58.030 03:18:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.030 03:18:35 -- common/autotest_common.sh@10 -- # set +x 00:04:58.030 ************************************ 00:04:58.030 START TEST accel_dualcast 00:04:58.030 ************************************ 00:04:58.030 03:18:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:04:58.030 03:18:35 -- accel/accel.sh@16 -- # local accel_opc 00:04:58.030 03:18:35 -- accel/accel.sh@17 -- # local accel_module 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:04:58.030 03:18:35 -- accel/accel.sh@12 -- # build_accel_config 00:04:58.030 03:18:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:58.030 03:18:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:58.030 03:18:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:58.030 03:18:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:58.030 03:18:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:58.030 03:18:35 -- accel/accel.sh@40 -- # local IFS=, 00:04:58.030 03:18:35 -- accel/accel.sh@41 -- # jq -r . 00:04:58.030 [2024-04-19 03:18:35.337929] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:04:58.030 [2024-04-19 03:18:35.337994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143602 ] 00:04:58.030 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.030 [2024-04-19 03:18:35.400112] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.030 [2024-04-19 03:18:35.518601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val= 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val= 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val=0x1 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val= 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val= 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val=dualcast 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val= 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val=software 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@22 -- # accel_module=software 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val=32 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val=32 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val=1 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val=Yes 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val= 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:58.030 03:18:35 -- accel/accel.sh@20 -- # val= 00:04:58.030 03:18:35 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # IFS=: 00:04:58.030 03:18:35 -- accel/accel.sh@19 -- # read -r var val 00:04:59.411 03:18:36 -- accel/accel.sh@20 -- # val= 00:04:59.411 03:18:36 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # IFS=: 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # read -r var val 00:04:59.411 03:18:36 -- accel/accel.sh@20 -- # val= 00:04:59.411 03:18:36 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # IFS=: 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # read -r var val 00:04:59.411 03:18:36 -- accel/accel.sh@20 -- # val= 00:04:59.411 03:18:36 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # IFS=: 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # read -r var val 00:04:59.411 03:18:36 -- accel/accel.sh@20 -- # val= 00:04:59.411 03:18:36 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # IFS=: 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # read -r var val 00:04:59.411 03:18:36 -- accel/accel.sh@20 -- # val= 00:04:59.411 03:18:36 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # IFS=: 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # read -r var val 00:04:59.411 03:18:36 -- accel/accel.sh@20 -- # val= 00:04:59.411 03:18:36 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # IFS=: 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # read -r var val 00:04:59.411 03:18:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:59.411 03:18:36 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:04:59.411 03:18:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:59.411 00:04:59.411 real 0m1.471s 00:04:59.411 user 0m1.326s 00:04:59.411 sys 0m0.145s 00:04:59.411 03:18:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.411 03:18:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.411 ************************************ 00:04:59.411 END TEST accel_dualcast 00:04:59.411 ************************************ 00:04:59.411 03:18:36 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:04:59.411 03:18:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:59.411 03:18:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.411 03:18:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.411 ************************************ 00:04:59.411 START TEST accel_compare 00:04:59.411 ************************************ 00:04:59.411 03:18:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:04:59.411 03:18:36 -- accel/accel.sh@16 -- # local accel_opc 00:04:59.411 03:18:36 -- 
accel/accel.sh@17 -- # local accel_module 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # IFS=: 00:04:59.411 03:18:36 -- accel/accel.sh@19 -- # read -r var val 00:04:59.411 03:18:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:04:59.411 03:18:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:04:59.411 03:18:36 -- accel/accel.sh@12 -- # build_accel_config 00:04:59.411 03:18:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.411 03:18:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.411 03:18:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.411 03:18:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.411 03:18:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.411 03:18:36 -- accel/accel.sh@40 -- # local IFS=, 00:04:59.411 03:18:36 -- accel/accel.sh@41 -- # jq -r . 00:04:59.411 [2024-04-19 03:18:36.924262] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:04:59.411 [2024-04-19 03:18:36.924328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143813 ] 00:04:59.411 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.670 [2024-04-19 03:18:36.986538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.670 [2024-04-19 03:18:37.106455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.670 03:18:37 -- accel/accel.sh@20 -- # val= 00:04:59.670 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.670 03:18:37 -- accel/accel.sh@20 -- # val= 00:04:59.670 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.670 03:18:37 -- accel/accel.sh@20 -- # val=0x1 00:04:59.670 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.670 03:18:37 -- accel/accel.sh@20 -- # val= 00:04:59.670 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.670 03:18:37 -- accel/accel.sh@20 -- # val= 00:04:59.670 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.670 03:18:37 -- accel/accel.sh@20 -- # val=compare 00:04:59.670 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.670 03:18:37 -- accel/accel.sh@23 -- # accel_opc=compare 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.670 03:18:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:59.670 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.670 03:18:37 -- accel/accel.sh@20 -- # val= 00:04:59.670 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.670 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.670 03:18:37 -- 
accel/accel.sh@20 -- # val=software 00:04:59.671 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.671 03:18:37 -- accel/accel.sh@22 -- # accel_module=software 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.671 03:18:37 -- accel/accel.sh@20 -- # val=32 00:04:59.671 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.671 03:18:37 -- accel/accel.sh@20 -- # val=32 00:04:59.671 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.671 03:18:37 -- accel/accel.sh@20 -- # val=1 00:04:59.671 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.671 03:18:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:59.671 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.671 03:18:37 -- accel/accel.sh@20 -- # val=Yes 00:04:59.671 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.671 03:18:37 -- accel/accel.sh@20 -- # val= 00:04:59.671 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:04:59.671 03:18:37 -- accel/accel.sh@20 -- # val= 00:04:59.671 03:18:37 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # IFS=: 00:04:59.671 03:18:37 -- accel/accel.sh@19 -- # read -r var val 00:05:01.051 03:18:38 -- accel/accel.sh@20 -- # val= 00:05:01.051 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.051 03:18:38 -- accel/accel.sh@20 -- # val= 00:05:01.051 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.051 03:18:38 -- accel/accel.sh@20 -- # val= 00:05:01.051 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.051 03:18:38 -- accel/accel.sh@20 -- # val= 00:05:01.051 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.051 03:18:38 -- accel/accel.sh@20 -- # val= 00:05:01.051 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.051 03:18:38 -- accel/accel.sh@20 -- # val= 00:05:01.051 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.051 03:18:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:01.051 03:18:38 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:01.051 03:18:38 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:01.051 00:05:01.051 real 0m1.476s 00:05:01.051 user 0m1.332s 00:05:01.051 sys 0m0.144s 00:05:01.051 03:18:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.051 03:18:38 -- common/autotest_common.sh@10 -- # set +x 00:05:01.051 ************************************ 00:05:01.051 END TEST accel_compare 00:05:01.051 ************************************ 00:05:01.051 03:18:38 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:01.051 03:18:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:01.051 03:18:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.051 03:18:38 -- common/autotest_common.sh@10 -- # set +x 00:05:01.051 ************************************ 00:05:01.051 START TEST accel_xor 00:05:01.051 ************************************ 00:05:01.051 03:18:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:05:01.051 03:18:38 -- accel/accel.sh@16 -- # local accel_opc 00:05:01.051 03:18:38 -- accel/accel.sh@17 -- # local accel_module 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.051 03:18:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:01.051 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.051 03:18:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:01.051 03:18:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:01.051 03:18:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:01.051 03:18:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:01.051 03:18:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.051 03:18:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.051 03:18:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:01.051 03:18:38 -- accel/accel.sh@40 -- # local IFS=, 00:05:01.051 03:18:38 -- accel/accel.sh@41 -- # jq -r . 00:05:01.051 [2024-04-19 03:18:38.518054] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
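The xor workload starting here XORs multiple source buffers into one destination; the "val=2" read back in the trace below is the source-buffer count, and a second run further down repeats the test with "-x 3" to use three sources. Stripped of the harness plumbing, the two invocations amount to the following (the "-c /dev/fd/62" config argument is omitted on the assumption that accel_perf falls back to its defaults without it):

accel_perf -t 1 -w xor -y        # 1-second run, verify results, 2 sources (default)
accel_perf -t 1 -w xor -y -x 3   # same, with 3 xor source buffers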
00:05:01.051 [2024-04-19 03:18:38.518119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144046 ] 00:05:01.051 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.052 [2024-04-19 03:18:38.580084] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.311 [2024-04-19 03:18:38.700199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.311 03:18:38 -- accel/accel.sh@20 -- # val= 00:05:01.311 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.311 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.311 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.311 03:18:38 -- accel/accel.sh@20 -- # val= 00:05:01.311 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.311 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.311 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.311 03:18:38 -- accel/accel.sh@20 -- # val=0x1 00:05:01.311 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.311 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.311 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.311 03:18:38 -- accel/accel.sh@20 -- # val= 00:05:01.311 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.312 03:18:38 -- accel/accel.sh@20 -- # val= 00:05:01.312 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.312 03:18:38 -- accel/accel.sh@20 -- # val=xor 00:05:01.312 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.312 03:18:38 -- accel/accel.sh@20 -- # val=2 00:05:01.312 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.312 03:18:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:01.312 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.312 03:18:38 -- accel/accel.sh@20 -- # val= 00:05:01.312 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.312 03:18:38 -- accel/accel.sh@20 -- # val=software 00:05:01.312 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@22 -- # accel_module=software 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.312 03:18:38 -- accel/accel.sh@20 -- # val=32 00:05:01.312 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.312 03:18:38 -- accel/accel.sh@20 -- # val=32 00:05:01.312 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.312 03:18:38 -- 
accel/accel.sh@20 -- # val=1 00:05:01.312 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.312 03:18:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:01.312 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.312 03:18:38 -- accel/accel.sh@20 -- # val=Yes 00:05:01.312 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.312 03:18:38 -- accel/accel.sh@20 -- # val= 00:05:01.312 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:01.312 03:18:38 -- accel/accel.sh@20 -- # val= 00:05:01.312 03:18:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # IFS=: 00:05:01.312 03:18:38 -- accel/accel.sh@19 -- # read -r var val 00:05:02.695 03:18:39 -- accel/accel.sh@20 -- # val= 00:05:02.695 03:18:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.695 03:18:39 -- accel/accel.sh@19 -- # IFS=: 00:05:02.695 03:18:39 -- accel/accel.sh@19 -- # read -r var val 00:05:02.695 03:18:39 -- accel/accel.sh@20 -- # val= 00:05:02.695 03:18:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.695 03:18:39 -- accel/accel.sh@19 -- # IFS=: 00:05:02.695 03:18:39 -- accel/accel.sh@19 -- # read -r var val 00:05:02.695 03:18:39 -- accel/accel.sh@20 -- # val= 00:05:02.695 03:18:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.695 03:18:39 -- accel/accel.sh@19 -- # IFS=: 00:05:02.695 03:18:39 -- accel/accel.sh@19 -- # read -r var val 00:05:02.695 03:18:39 -- accel/accel.sh@20 -- # val= 00:05:02.695 03:18:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.695 03:18:39 -- accel/accel.sh@19 -- # IFS=: 00:05:02.695 03:18:39 -- accel/accel.sh@19 -- # read -r var val 00:05:02.695 03:18:39 -- accel/accel.sh@20 -- # val= 00:05:02.695 03:18:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.695 03:18:39 -- accel/accel.sh@19 -- # IFS=: 00:05:02.695 03:18:39 -- accel/accel.sh@19 -- # read -r var val 00:05:02.695 03:18:39 -- accel/accel.sh@20 -- # val= 00:05:02.695 03:18:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.695 03:18:39 -- accel/accel.sh@19 -- # IFS=: 00:05:02.695 03:18:39 -- accel/accel.sh@19 -- # read -r var val 00:05:02.695 03:18:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:02.695 03:18:39 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:02.695 03:18:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:02.695 00:05:02.695 real 0m1.484s 00:05:02.695 user 0m1.344s 00:05:02.695 sys 0m0.141s 00:05:02.695 03:18:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.695 03:18:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.695 ************************************ 00:05:02.695 END TEST accel_xor 00:05:02.695 ************************************ 00:05:02.695 03:18:40 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:02.695 03:18:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:02.695 03:18:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.695 03:18:40 -- common/autotest_common.sh@10 -- # set +x 00:05:02.695 ************************************ 00:05:02.695 START TEST accel_xor 
00:05:02.695 ************************************ 00:05:02.695 03:18:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:05:02.696 03:18:40 -- accel/accel.sh@16 -- # local accel_opc 00:05:02.696 03:18:40 -- accel/accel.sh@17 -- # local accel_module 00:05:02.696 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.696 03:18:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:02.696 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.696 03:18:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:02.696 03:18:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:02.696 03:18:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:02.696 03:18:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:02.696 03:18:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.696 03:18:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.696 03:18:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:02.696 03:18:40 -- accel/accel.sh@40 -- # local IFS=, 00:05:02.696 03:18:40 -- accel/accel.sh@41 -- # jq -r . 00:05:02.696 [2024-04-19 03:18:40.128372] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:05:02.696 [2024-04-19 03:18:40.128466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144210 ] 00:05:02.696 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.696 [2024-04-19 03:18:40.193875] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.956 [2024-04-19 03:18:40.317210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.956 03:18:40 -- accel/accel.sh@20 -- # val= 00:05:02.956 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.956 03:18:40 -- accel/accel.sh@20 -- # val= 00:05:02.956 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.956 03:18:40 -- accel/accel.sh@20 -- # val=0x1 00:05:02.956 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.956 03:18:40 -- accel/accel.sh@20 -- # val= 00:05:02.956 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.956 03:18:40 -- accel/accel.sh@20 -- # val= 00:05:02.956 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.956 03:18:40 -- accel/accel.sh@20 -- # val=xor 00:05:02.956 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.956 03:18:40 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.956 03:18:40 -- accel/accel.sh@20 -- # val=3 00:05:02.956 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.956 03:18:40 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:05:02.956 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.956 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.957 03:18:40 -- accel/accel.sh@20 -- # val= 00:05:02.957 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.957 03:18:40 -- accel/accel.sh@20 -- # val=software 00:05:02.957 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.957 03:18:40 -- accel/accel.sh@22 -- # accel_module=software 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.957 03:18:40 -- accel/accel.sh@20 -- # val=32 00:05:02.957 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.957 03:18:40 -- accel/accel.sh@20 -- # val=32 00:05:02.957 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.957 03:18:40 -- accel/accel.sh@20 -- # val=1 00:05:02.957 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.957 03:18:40 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:02.957 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.957 03:18:40 -- accel/accel.sh@20 -- # val=Yes 00:05:02.957 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.957 03:18:40 -- accel/accel.sh@20 -- # val= 00:05:02.957 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:02.957 03:18:40 -- accel/accel.sh@20 -- # val= 00:05:02.957 03:18:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # IFS=: 00:05:02.957 03:18:40 -- accel/accel.sh@19 -- # read -r var val 00:05:04.344 03:18:41 -- accel/accel.sh@20 -- # val= 00:05:04.344 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.344 03:18:41 -- accel/accel.sh@20 -- # val= 00:05:04.344 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.344 03:18:41 -- accel/accel.sh@20 -- # val= 00:05:04.344 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.344 03:18:41 -- accel/accel.sh@20 -- # val= 00:05:04.344 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.344 03:18:41 -- accel/accel.sh@20 -- # val= 00:05:04.344 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # 
read -r var val 00:05:04.344 03:18:41 -- accel/accel.sh@20 -- # val= 00:05:04.344 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.344 03:18:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:04.344 03:18:41 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:04.344 03:18:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:04.344 00:05:04.344 real 0m1.481s 00:05:04.344 user 0m1.338s 00:05:04.344 sys 0m0.143s 00:05:04.344 03:18:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.344 03:18:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.344 ************************************ 00:05:04.344 END TEST accel_xor 00:05:04.344 ************************************ 00:05:04.344 03:18:41 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:04.344 03:18:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:04.344 03:18:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.344 03:18:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.344 ************************************ 00:05:04.344 START TEST accel_dif_verify 00:05:04.344 ************************************ 00:05:04.344 03:18:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:05:04.344 03:18:41 -- accel/accel.sh@16 -- # local accel_opc 00:05:04.344 03:18:41 -- accel/accel.sh@17 -- # local accel_module 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.344 03:18:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:04.344 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.344 03:18:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:04.344 03:18:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:04.344 03:18:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.344 03:18:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.344 03:18:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.344 03:18:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.344 03:18:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.344 03:18:41 -- accel/accel.sh@40 -- # local IFS=, 00:05:04.344 03:18:41 -- accel/accel.sh@41 -- # jq -r . 00:05:04.344 [2024-04-19 03:18:41.728513] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
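The dif_verify case starting here checks T10 DIF protection information. The sizes read back in the trace below, 4096-byte buffers with a 512-byte block size and 8 bytes of metadata, match the classic DIF layout in which each 512-byte block carries an 8-byte tuple (guard CRC, application tag, reference tag). A quick sanity check of those numbers; the extended size is an illustration, not a figure this test prints:

blocks=$((4096 / 512))            # 8 protected blocks per 4 KiB buffer
extended=$((4096 + blocks * 8))   # 4160 bytes once the 8-byte DIF tuples are interleaved
echo "$blocks blocks, $extended bytes with DIF"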
00:05:04.344 [2024-04-19 03:18:41.728580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144490 ] 00:05:04.344 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.344 [2024-04-19 03:18:41.790762] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.603 [2024-04-19 03:18:41.912700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val= 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val= 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val=0x1 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val= 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val= 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val=dif_verify 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val= 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val=software 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@22 -- # accel_module=software 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r 
var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val=32 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val=32 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val=1 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val=No 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val= 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:04.603 03:18:41 -- accel/accel.sh@20 -- # val= 00:05:04.603 03:18:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # IFS=: 00:05:04.603 03:18:41 -- accel/accel.sh@19 -- # read -r var val 00:05:05.983 03:18:43 -- accel/accel.sh@20 -- # val= 00:05:05.983 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:05.983 03:18:43 -- accel/accel.sh@20 -- # val= 00:05:05.983 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:05.983 03:18:43 -- accel/accel.sh@20 -- # val= 00:05:05.983 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:05.983 03:18:43 -- accel/accel.sh@20 -- # val= 00:05:05.983 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:05.983 03:18:43 -- accel/accel.sh@20 -- # val= 00:05:05.983 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:05.983 03:18:43 -- accel/accel.sh@20 -- # val= 00:05:05.983 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:05.983 03:18:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:05.983 03:18:43 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:05.983 03:18:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:05.983 00:05:05.983 real 0m1.487s 00:05:05.983 user 0m1.348s 00:05:05.983 sys 0m0.142s 00:05:05.983 03:18:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.983 03:18:43 -- common/autotest_common.sh@10 -- # set +x 00:05:05.983 
************************************ 00:05:05.983 END TEST accel_dif_verify 00:05:05.983 ************************************ 00:05:05.983 03:18:43 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:05.983 03:18:43 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:05.983 03:18:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.983 03:18:43 -- common/autotest_common.sh@10 -- # set +x 00:05:05.983 ************************************ 00:05:05.983 START TEST accel_dif_generate 00:05:05.983 ************************************ 00:05:05.983 03:18:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:05:05.983 03:18:43 -- accel/accel.sh@16 -- # local accel_opc 00:05:05.983 03:18:43 -- accel/accel.sh@17 -- # local accel_module 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:05.983 03:18:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:05.983 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:05.983 03:18:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:05.983 03:18:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:05.983 03:18:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.983 03:18:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.983 03:18:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.983 03:18:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.983 03:18:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.983 03:18:43 -- accel/accel.sh@40 -- # local IFS=, 00:05:05.983 03:18:43 -- accel/accel.sh@41 -- # jq -r . 00:05:05.983 [2024-04-19 03:18:43.343402] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
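The "START TEST"/"END TEST" banners and the real/user/sys triplets that bracket every case in this stretch come from the run_test wrapper in autotest_common.sh, which times each test function with the bash time builtin. A simplified sketch of its shape; banner formatting and error handling are reduced to the essentials, not copied verbatim:

run_test() {
  local name=$1; shift
  echo "************ START TEST $name ************"
  time "$@"        # produces the real/user/sys lines seen above
  echo "************ END TEST $name ************"
}
run_test accel_dif_generate accel_test -t 1 -w dif_generate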
00:05:05.983 [2024-04-19 03:18:43.343481] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144663 ] 00:05:05.983 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.983 [2024-04-19 03:18:43.408766] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.983 [2024-04-19 03:18:43.531601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val= 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val= 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val=0x1 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val= 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val= 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val=dif_generate 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val= 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val=software 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@22 -- # accel_module=software 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read 
-r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val=32 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val=32 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val=1 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val=No 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val= 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:06.242 03:18:43 -- accel/accel.sh@20 -- # val= 00:05:06.242 03:18:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # IFS=: 00:05:06.242 03:18:43 -- accel/accel.sh@19 -- # read -r var val 00:05:07.618 03:18:44 -- accel/accel.sh@20 -- # val= 00:05:07.618 03:18:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # IFS=: 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # read -r var val 00:05:07.618 03:18:44 -- accel/accel.sh@20 -- # val= 00:05:07.618 03:18:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # IFS=: 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # read -r var val 00:05:07.618 03:18:44 -- accel/accel.sh@20 -- # val= 00:05:07.618 03:18:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # IFS=: 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # read -r var val 00:05:07.618 03:18:44 -- accel/accel.sh@20 -- # val= 00:05:07.618 03:18:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # IFS=: 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # read -r var val 00:05:07.618 03:18:44 -- accel/accel.sh@20 -- # val= 00:05:07.618 03:18:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # IFS=: 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # read -r var val 00:05:07.618 03:18:44 -- accel/accel.sh@20 -- # val= 00:05:07.618 03:18:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # IFS=: 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # read -r var val 00:05:07.618 03:18:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:07.618 03:18:44 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:07.618 03:18:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:07.618 00:05:07.618 real 0m1.493s 00:05:07.618 user 0m1.349s 00:05:07.618 sys 0m0.147s 00:05:07.618 03:18:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.618 03:18:44 -- common/autotest_common.sh@10 -- # set +x 00:05:07.618 
************************************ 00:05:07.618 END TEST accel_dif_generate 00:05:07.618 ************************************ 00:05:07.618 03:18:44 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:07.618 03:18:44 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:07.618 03:18:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.618 03:18:44 -- common/autotest_common.sh@10 -- # set +x 00:05:07.618 ************************************ 00:05:07.618 START TEST accel_dif_generate_copy 00:05:07.618 ************************************ 00:05:07.618 03:18:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:05:07.618 03:18:44 -- accel/accel.sh@16 -- # local accel_opc 00:05:07.618 03:18:44 -- accel/accel.sh@17 -- # local accel_module 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # IFS=: 00:05:07.618 03:18:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:07.618 03:18:44 -- accel/accel.sh@19 -- # read -r var val 00:05:07.618 03:18:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:07.618 03:18:44 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.618 03:18:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.618 03:18:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.618 03:18:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.618 03:18:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.618 03:18:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.618 03:18:44 -- accel/accel.sh@40 -- # local IFS=, 00:05:07.618 03:18:44 -- accel/accel.sh@41 -- # jq -r . 00:05:07.618 [2024-04-19 03:18:44.953348] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
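Every accel_perf invocation in this log receives "-c /dev/fd/62": build_accel_config (traced at accel.sh@31..41 above) assembles a JSON module config and apparently hands it to accel_perf via process substitution. In this run all of the "[[ 0 -gt 0 ]]" module switches are false, so the array stays empty, the "[[ -n '' ]]" check fails, and accel_perf falls back to the software module. A rough sketch; the environment-variable and RPC method names are illustrative assumptions:

build_accel_config() {
  local accel_json_cfg=()
  # a hardware module entry would be appended when its test switch is on
  [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
  local IFS=,
  if [[ -n ${accel_json_cfg[*]} ]]; then
    echo "[${accel_json_cfg[*]}]" | jq -r .    # joined with commas, prettified by jq (accel.sh@41)
  fi
}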
00:05:07.618 [2024-04-19 03:18:44.953426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144938 ] 00:05:07.618 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.618 [2024-04-19 03:18:45.015093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.618 [2024-04-19 03:18:45.138196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.877 03:18:45 -- accel/accel.sh@20 -- # val= 00:05:07.877 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.877 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.877 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.877 03:18:45 -- accel/accel.sh@20 -- # val= 00:05:07.877 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.877 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.877 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.877 03:18:45 -- accel/accel.sh@20 -- # val=0x1 00:05:07.877 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.877 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.877 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.877 03:18:45 -- accel/accel.sh@20 -- # val= 00:05:07.877 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.877 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.877 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.877 03:18:45 -- accel/accel.sh@20 -- # val= 00:05:07.877 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.877 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.877 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.877 03:18:45 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:07.877 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.877 03:18:45 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.878 03:18:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.878 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.878 03:18:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.878 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.878 03:18:45 -- accel/accel.sh@20 -- # val= 00:05:07.878 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.878 03:18:45 -- accel/accel.sh@20 -- # val=software 00:05:07.878 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.878 03:18:45 -- accel/accel.sh@22 -- # accel_module=software 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.878 03:18:45 -- accel/accel.sh@20 -- # val=32 00:05:07.878 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.878 03:18:45 -- accel/accel.sh@20 -- # val=32 00:05:07.878 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # read -r var 
val 00:05:07.878 03:18:45 -- accel/accel.sh@20 -- # val=1 00:05:07.878 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.878 03:18:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:07.878 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.878 03:18:45 -- accel/accel.sh@20 -- # val=No 00:05:07.878 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.878 03:18:45 -- accel/accel.sh@20 -- # val= 00:05:07.878 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:07.878 03:18:45 -- accel/accel.sh@20 -- # val= 00:05:07.878 03:18:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # IFS=: 00:05:07.878 03:18:45 -- accel/accel.sh@19 -- # read -r var val 00:05:09.256 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.256 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.256 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.256 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.256 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.256 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.256 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.256 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.256 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.256 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.256 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.256 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.256 03:18:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.256 03:18:46 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:09.256 03:18:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.256 00:05:09.256 real 0m1.479s 00:05:09.256 user 0m1.331s 00:05:09.256 sys 0m0.148s 00:05:09.256 03:18:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:09.256 03:18:46 -- common/autotest_common.sh@10 -- # set +x 00:05:09.256 ************************************ 00:05:09.256 END TEST accel_dif_generate_copy 00:05:09.256 ************************************ 00:05:09.256 03:18:46 -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:09.256 03:18:46 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:09.256 03:18:46 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:09.256 03:18:46 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.256 03:18:46 -- common/autotest_common.sh@10 -- # set +x 00:05:09.256 ************************************ 00:05:09.256 START TEST accel_comp 00:05:09.256 ************************************ 00:05:09.256 03:18:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:09.256 03:18:46 -- accel/accel.sh@16 -- # local accel_opc 00:05:09.256 03:18:46 -- accel/accel.sh@17 -- # local accel_module 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.256 03:18:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:09.256 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.256 03:18:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:09.256 03:18:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:09.256 03:18:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.256 03:18:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.256 03:18:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.256 03:18:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.256 03:18:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.256 03:18:46 -- accel/accel.sh@40 -- # local IFS=, 00:05:09.256 03:18:46 -- accel/accel.sh@41 -- # jq -r . 00:05:09.256 [2024-04-19 03:18:46.566699] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:05:09.256 [2024-04-19 03:18:46.566766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145108 ] 00:05:09.256 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.256 [2024-04-19 03:18:46.634084] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.256 [2024-04-19 03:18:46.754434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val=0x1 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 
-- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val=compress 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@23 -- # accel_opc=compress 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val=software 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@22 -- # accel_module=software 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val=32 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val=32 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val=1 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val=No 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:09.515 03:18:46 -- accel/accel.sh@20 -- # val= 00:05:09.515 03:18:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # IFS=: 00:05:09.515 03:18:46 -- accel/accel.sh@19 -- # read -r var val 00:05:10.481 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:10.481 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.481 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:10.481 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:10.481 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:10.482 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.482 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:10.482 03:18:48 -- accel/accel.sh@19 -- # read 
-r var val 00:05:10.482 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:10.482 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.482 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:10.482 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:10.482 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:10.482 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.482 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:10.482 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:10.482 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:10.482 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.482 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:10.482 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:10.482 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:10.482 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.482 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:10.482 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:10.482 03:18:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.482 03:18:48 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:10.482 03:18:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.482 00:05:10.482 real 0m1.477s 00:05:10.482 user 0m1.332s 00:05:10.482 sys 0m0.147s 00:05:10.482 03:18:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.482 03:18:48 -- common/autotest_common.sh@10 -- # set +x 00:05:10.482 ************************************ 00:05:10.482 END TEST accel_comp 00:05:10.482 ************************************ 00:05:10.742 03:18:48 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.742 03:18:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:10.742 03:18:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.742 03:18:48 -- common/autotest_common.sh@10 -- # set +x 00:05:10.742 ************************************ 00:05:10.742 START TEST accel_decomp 00:05:10.742 ************************************ 00:05:10.742 03:18:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.742 03:18:48 -- accel/accel.sh@16 -- # local accel_opc 00:05:10.742 03:18:48 -- accel/accel.sh@17 -- # local accel_module 00:05:10.742 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:10.742 03:18:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.742 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:10.742 03:18:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.742 03:18:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:10.742 03:18:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.742 03:18:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.742 03:18:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.742 03:18:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.742 03:18:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.742 03:18:48 -- accel/accel.sh@40 -- # local IFS=, 00:05:10.742 03:18:48 -- accel/accel.sh@41 -- # jq -r . 00:05:10.742 [2024-04-19 03:18:48.168957] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
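Note: the accel_decomp case starting here reuses the invocation pattern of the two cases above, switching the workload to decompress and adding -y so accel_perf verifies the output buffers. A minimal sketch of replaying this case by hand against a local SPDK build follows; SPDK_DIR is a placeholder for the checkout path, and running without the harness's -c /dev/fd/62 JSON config is an assumption that the default software module is picked up, not something this log demonstrates.

#!/usr/bin/env bash
set -euo pipefail
# Placeholder: point at your own SPDK checkout (the log uses the Jenkins workspace path).
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
# -t 1: run for 1 second; -w decompress: workload type;
# -l: data file used by the (de)compression workloads; -y: verify results.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y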
00:05:10.742 [2024-04-19 03:18:48.169019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145272 ] 00:05:10.742 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.742 [2024-04-19 03:18:48.230737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.000 [2024-04-19 03:18:48.352691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val=0x1 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val=decompress 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val=software 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@22 -- # accel_module=software 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val=32 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 
-- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val=32 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val=1 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val=Yes 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:11.000 03:18:48 -- accel/accel.sh@20 -- # val= 00:05:11.000 03:18:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # IFS=: 00:05:11.000 03:18:48 -- accel/accel.sh@19 -- # read -r var val 00:05:12.378 03:18:49 -- accel/accel.sh@20 -- # val= 00:05:12.378 03:18:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # IFS=: 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # read -r var val 00:05:12.378 03:18:49 -- accel/accel.sh@20 -- # val= 00:05:12.378 03:18:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # IFS=: 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # read -r var val 00:05:12.378 03:18:49 -- accel/accel.sh@20 -- # val= 00:05:12.378 03:18:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # IFS=: 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # read -r var val 00:05:12.378 03:18:49 -- accel/accel.sh@20 -- # val= 00:05:12.378 03:18:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # IFS=: 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # read -r var val 00:05:12.378 03:18:49 -- accel/accel.sh@20 -- # val= 00:05:12.378 03:18:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # IFS=: 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # read -r var val 00:05:12.378 03:18:49 -- accel/accel.sh@20 -- # val= 00:05:12.378 03:18:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # IFS=: 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # read -r var val 00:05:12.378 03:18:49 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.378 03:18:49 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:12.378 03:18:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.378 00:05:12.378 real 0m1.481s 00:05:12.378 user 0m1.338s 00:05:12.378 sys 0m0.146s 00:05:12.378 03:18:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:12.378 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:05:12.378 ************************************ 00:05:12.378 END TEST accel_decomp 00:05:12.378 ************************************ 00:05:12.378 03:18:49 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:12.378 03:18:49 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:12.378 03:18:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.378 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:05:12.378 ************************************ 00:05:12.378 START TEST accel_decmop_full 00:05:12.378 ************************************ 00:05:12.378 03:18:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:12.378 03:18:49 -- accel/accel.sh@16 -- # local accel_opc 00:05:12.378 03:18:49 -- accel/accel.sh@17 -- # local accel_module 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # IFS=: 00:05:12.378 03:18:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:12.378 03:18:49 -- accel/accel.sh@19 -- # read -r var val 00:05:12.378 03:18:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:12.378 03:18:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:12.378 03:18:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.378 03:18:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.378 03:18:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.378 03:18:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.378 03:18:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.378 03:18:49 -- accel/accel.sh@40 -- # local IFS=, 00:05:12.378 03:18:49 -- accel/accel.sh@41 -- # jq -r . 00:05:12.379 [2024-04-19 03:18:49.772512] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
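Note: accel_decmop_full (the "decmop" spelling comes from the test script itself) differs from the previous case only by -o 0. Comparing the xtrace values, the earlier runs report '4096 bytes' while this one reports '111250 bytes', which suggests -o selects the transfer size and 0 means "use the full size of the -l input file"; that reading is inferred from this log, not from accel_perf documentation. A sketch of the two variants side by side, under the same SPDK_DIR assumption as above:

#!/usr/bin/env bash
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
BIB="$SPDK_DIR/test/accel/bib"
# Default transfer size (logged as '4096 bytes'):
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y
# Full-file transfers (logged as '111250 bytes', the size of the bib file):
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -o 0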
00:05:12.379 [2024-04-19 03:18:49.772598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145556 ] 00:05:12.379 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.379 [2024-04-19 03:18:49.836322] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.639 [2024-04-19 03:18:49.957978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val= 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val= 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val= 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val=0x1 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val= 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val= 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val=decompress 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val= 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val=software 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@22 -- # accel_module=software 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val=32 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 
-- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val=32 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val=1 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val=Yes 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val= 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:12.639 03:18:50 -- accel/accel.sh@20 -- # val= 00:05:12.639 03:18:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # IFS=: 00:05:12.639 03:18:50 -- accel/accel.sh@19 -- # read -r var val 00:05:14.021 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.021 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.021 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.021 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.021 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.021 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.021 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.021 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.021 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.021 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.021 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.021 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.021 03:18:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.021 03:18:51 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:14.021 03:18:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.021 00:05:14.021 real 0m1.507s 00:05:14.021 user 0m1.354s 00:05:14.021 sys 0m0.154s 00:05:14.021 03:18:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.021 03:18:51 -- common/autotest_common.sh@10 -- # set +x 00:05:14.021 ************************************ 00:05:14.021 END TEST accel_decmop_full 00:05:14.021 ************************************ 00:05:14.021 03:18:51 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:14.021 03:18:51 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:14.021 03:18:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.021 03:18:51 -- common/autotest_common.sh@10 -- # set +x 00:05:14.021 ************************************ 00:05:14.021 START TEST accel_decomp_mcore 00:05:14.021 ************************************ 00:05:14.021 03:18:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:14.021 03:18:51 -- accel/accel.sh@16 -- # local accel_opc 00:05:14.021 03:18:51 -- accel/accel.sh@17 -- # local accel_module 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.021 03:18:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:14.021 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.021 03:18:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:14.021 03:18:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:14.021 03:18:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.021 03:18:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.021 03:18:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.021 03:18:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.021 03:18:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.021 03:18:51 -- accel/accel.sh@40 -- # local IFS=, 00:05:14.021 03:18:51 -- accel/accel.sh@41 -- # jq -r . 00:05:14.021 [2024-04-19 03:18:51.397370] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
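Note: accel_decomp_mcore adds -m 0xf, and the effect is visible directly below: the DPDK EAL line switches from -c 0x1 to -c 0xf, spdk_app_start reports four cores, and a reactor starts on each of cores 0-3. The mask is a plain bitmap of CPU cores; a small sketch of how such a mask is built (illustrative arithmetic only, the 0xf value itself is what the test passes):

#!/usr/bin/env bash
# Build a core mask covering cores 0..N-1: mask = (1 << N) - 1.
ncores=4
printf -v mask '0x%x' $(( (1 << ncores) - 1 ))
echo "$mask"   # prints 0xf, the value passed via -m above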
00:05:14.021 [2024-04-19 03:18:51.397456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145718 ] 00:05:14.021 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.021 [2024-04-19 03:18:51.462228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.281 [2024-04-19 03:18:51.585931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.281 [2024-04-19 03:18:51.585982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.281 [2024-04-19 03:18:51.586033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.281 [2024-04-19 03:18:51.586037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val=0xf 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val=decompress 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val=software 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@22 -- # accel_module=software 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val=32 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val=32 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val=1 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val=Yes 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:14.281 03:18:51 -- accel/accel.sh@20 -- # val= 00:05:14.281 03:18:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # IFS=: 00:05:14.281 03:18:51 -- accel/accel.sh@19 -- # read -r var val 00:05:15.664 03:18:52 -- accel/accel.sh@20 -- # val= 00:05:15.664 03:18:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # IFS=: 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # read -r var val 00:05:15.664 03:18:52 -- accel/accel.sh@20 -- # val= 00:05:15.664 03:18:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # IFS=: 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # read -r var val 00:05:15.664 03:18:52 -- accel/accel.sh@20 -- # val= 00:05:15.664 03:18:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # IFS=: 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # read -r var val 00:05:15.664 03:18:52 -- accel/accel.sh@20 -- # val= 00:05:15.664 03:18:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # IFS=: 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # read -r var val 00:05:15.664 03:18:52 -- accel/accel.sh@20 -- # val= 00:05:15.664 03:18:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # IFS=: 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # read -r var val 00:05:15.664 03:18:52 -- accel/accel.sh@20 -- # val= 00:05:15.664 03:18:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # IFS=: 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # read -r var val 00:05:15.664 03:18:52 -- accel/accel.sh@20 -- # val= 00:05:15.664 03:18:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # IFS=: 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # read -r var val 00:05:15.664 03:18:52 -- accel/accel.sh@20 -- # val= 00:05:15.664 03:18:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.664 
03:18:52 -- accel/accel.sh@19 -- # IFS=: 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # read -r var val 00:05:15.664 03:18:52 -- accel/accel.sh@20 -- # val= 00:05:15.664 03:18:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # IFS=: 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # read -r var val 00:05:15.664 03:18:52 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.664 03:18:52 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:15.664 03:18:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.664 00:05:15.664 real 0m1.489s 00:05:15.664 user 0m4.778s 00:05:15.664 sys 0m0.159s 00:05:15.664 03:18:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.664 03:18:52 -- common/autotest_common.sh@10 -- # set +x 00:05:15.664 ************************************ 00:05:15.664 END TEST accel_decomp_mcore 00:05:15.664 ************************************ 00:05:15.664 03:18:52 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:15.664 03:18:52 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:15.664 03:18:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.664 03:18:52 -- common/autotest_common.sh@10 -- # set +x 00:05:15.664 ************************************ 00:05:15.664 START TEST accel_decomp_full_mcore 00:05:15.664 ************************************ 00:05:15.664 03:18:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:15.664 03:18:52 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.664 03:18:52 -- accel/accel.sh@17 -- # local accel_module 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # IFS=: 00:05:15.664 03:18:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:15.664 03:18:52 -- accel/accel.sh@19 -- # read -r var val 00:05:15.664 03:18:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:15.664 03:18:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.664 03:18:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.664 03:18:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.664 03:18:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.664 03:18:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.664 03:18:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.664 03:18:52 -- accel/accel.sh@40 -- # local IFS=, 00:05:15.664 03:18:52 -- accel/accel.sh@41 -- # jq -r . 00:05:15.664 [2024-04-19 03:18:53.008647] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
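Note: accel_decomp_full_mcore combines the two preceding variants, full-file transfers (-o 0) across four cores (-m 0xf); the user time exceeding wall time in the summary above (user 0m4.778s against real 0m1.489s) is consistent with four reactors running in parallel. The START TEST/END TEST banners and the real/user/sys summary around each case come from the harness's run_test wrapper; below is a simplified sketch of that wrapper's shape, not SPDK's actual autotest_common.sh implementation.

#!/usr/bin/env bash
# Simplified run_test-style wrapper: banner, timed command, banner.
run_test_sketch() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"   # the harness times each case, hence the real/user/sys lines above
    echo "************ END TEST $name ************"
}
# Example, assuming SPDK_DIR points at a local checkout:
# run_test_sketch accel_decomp_full_mcore "$SPDK_DIR/build/examples/accel_perf" \
#     -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0 -m 0xf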
00:05:15.664 [2024-04-19 03:18:53.008714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146003 ] 00:05:15.664 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.664 [2024-04-19 03:18:53.071772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:15.664 [2024-04-19 03:18:53.196987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.664 [2024-04-19 03:18:53.197046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.664 [2024-04-19 03:18:53.197098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.664 [2024-04-19 03:18:53.197101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val= 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val= 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val= 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val=0xf 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val= 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val= 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val=decompress 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val= 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val=software 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@22 -- # accel_module=software 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val=32 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val=32 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val=1 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val=Yes 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val= 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:15.924 03:18:53 -- accel/accel.sh@20 -- # val= 00:05:15.924 03:18:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # IFS=: 00:05:15.924 03:18:53 -- accel/accel.sh@19 -- # read -r var val 00:05:17.302 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.302 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.302 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.302 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.302 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.302 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.302 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.302 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.302 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.302 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.302 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.302 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.302 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.302 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.302 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.302 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.302 
03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.302 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.302 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.302 03:18:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.302 03:18:54 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:17.302 03:18:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.302 00:05:17.302 real 0m1.504s 00:05:17.302 user 0m4.833s 00:05:17.302 sys 0m0.158s 00:05:17.302 03:18:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.302 03:18:54 -- common/autotest_common.sh@10 -- # set +x 00:05:17.302 ************************************ 00:05:17.302 END TEST accel_decomp_full_mcore 00:05:17.302 ************************************ 00:05:17.302 03:18:54 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:17.302 03:18:54 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:17.302 03:18:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.302 03:18:54 -- common/autotest_common.sh@10 -- # set +x 00:05:17.302 ************************************ 00:05:17.302 START TEST accel_decomp_mthread 00:05:17.302 ************************************ 00:05:17.302 03:18:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:17.302 03:18:54 -- accel/accel.sh@16 -- # local accel_opc 00:05:17.302 03:18:54 -- accel/accel.sh@17 -- # local accel_module 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.302 03:18:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:17.302 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.302 03:18:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:17.302 03:18:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:17.302 03:18:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.302 03:18:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.302 03:18:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.302 03:18:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.302 03:18:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.302 03:18:54 -- accel/accel.sh@40 -- # local IFS=, 00:05:17.302 03:18:54 -- accel/accel.sh@41 -- # jq -r . 00:05:17.302 [2024-04-19 03:18:54.628373] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
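Note: accel_decomp_mthread drops the core mask and instead passes -T 2; in the value dump below, the slot that read val=1 in the single-threaded cases reads val=2 here, so -T evidently controls the worker thread count per core (inferred from the logged values, not from documentation). A sketch of the invocation, under the same SPDK_DIR assumption as earlier:

#!/usr/bin/env bash
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
# Two worker threads on a single core (-T 2), verifying results (-y).
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -T 2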
00:05:17.302 [2024-04-19 03:18:54.628468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146178 ] 00:05:17.302 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.302 [2024-04-19 03:18:54.692295] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.302 [2024-04-19 03:18:54.815072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val=0x1 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val=decompress 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val=software 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@22 -- # accel_module=software 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val=32 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 
-- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val=32 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val=2 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val=Yes 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:17.561 03:18:54 -- accel/accel.sh@20 -- # val= 00:05:17.561 03:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # IFS=: 00:05:17.561 03:18:54 -- accel/accel.sh@19 -- # read -r var val 00:05:18.942 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:18.942 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:18.942 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:18.942 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:18.942 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:18.942 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:18.942 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:18.942 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:18.942 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:18.942 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:18.942 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:18.942 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:18.942 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:18.942 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:18.942 03:18:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.942 03:18:56 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:18.942 03:18:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.942 00:05:18.942 real 0m1.497s 00:05:18.942 user 0m1.354s 00:05:18.942 sys 0m0.144s 00:05:18.942 03:18:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.942 03:18:56 -- common/autotest_common.sh@10 -- # set +x 
00:05:18.942 ************************************ 00:05:18.942 END TEST accel_decomp_mthread 00:05:18.942 ************************************ 00:05:18.942 03:18:56 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:18.942 03:18:56 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:18.942 03:18:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.942 03:18:56 -- common/autotest_common.sh@10 -- # set +x 00:05:18.942 ************************************ 00:05:18.942 START TEST accel_deomp_full_mthread 00:05:18.942 ************************************ 00:05:18.942 03:18:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:18.942 03:18:56 -- accel/accel.sh@16 -- # local accel_opc 00:05:18.942 03:18:56 -- accel/accel.sh@17 -- # local accel_module 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:18.942 03:18:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:18.942 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:18.942 03:18:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:18.942 03:18:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.942 03:18:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.942 03:18:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.942 03:18:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.942 03:18:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.942 03:18:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.942 03:18:56 -- accel/accel.sh@40 -- # local IFS=, 00:05:18.942 03:18:56 -- accel/accel.sh@41 -- # jq -r . 00:05:18.942 [2024-04-19 03:18:56.250633] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:05:18.942 [2024-04-19 03:18:56.250708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146350 ] 00:05:18.942 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.942 [2024-04-19 03:18:56.315542] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.942 [2024-04-19 03:18:56.438045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val=0x1 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val=decompress 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val=software 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@22 -- # accel_module=software 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val=32 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 
-- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val=32 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val=2 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val=Yes 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:19.203 03:18:56 -- accel/accel.sh@20 -- # val= 00:05:19.203 03:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # IFS=: 00:05:19.203 03:18:56 -- accel/accel.sh@19 -- # read -r var val 00:05:20.591 03:18:57 -- accel/accel.sh@20 -- # val= 00:05:20.591 03:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # IFS=: 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # read -r var val 00:05:20.591 03:18:57 -- accel/accel.sh@20 -- # val= 00:05:20.591 03:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # IFS=: 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # read -r var val 00:05:20.591 03:18:57 -- accel/accel.sh@20 -- # val= 00:05:20.591 03:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # IFS=: 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # read -r var val 00:05:20.591 03:18:57 -- accel/accel.sh@20 -- # val= 00:05:20.591 03:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # IFS=: 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # read -r var val 00:05:20.591 03:18:57 -- accel/accel.sh@20 -- # val= 00:05:20.591 03:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # IFS=: 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # read -r var val 00:05:20.591 03:18:57 -- accel/accel.sh@20 -- # val= 00:05:20.591 03:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # IFS=: 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # read -r var val 00:05:20.591 03:18:57 -- accel/accel.sh@20 -- # val= 00:05:20.591 03:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # IFS=: 00:05:20.591 03:18:57 -- accel/accel.sh@19 -- # read -r var val 00:05:20.591 03:18:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.591 03:18:57 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:20.591 03:18:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.591 00:05:20.591 real 0m1.524s 00:05:20.591 user 0m1.374s 00:05:20.591 sys 0m0.151s 00:05:20.591 03:18:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.591 03:18:57 -- common/autotest_common.sh@10 -- # set +x 
00:05:20.591 ************************************
00:05:20.591 END TEST accel_deomp_full_mthread
00:05:20.591 ************************************
00:05:20.592 03:18:57 -- accel/accel.sh@124 -- # [[ n == y ]]
00:05:20.592 03:18:57 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:05:20.592 03:18:57 -- accel/accel.sh@137 -- # build_accel_config
00:05:20.592 03:18:57 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:20.592 03:18:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:05:20.592 03:18:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:20.592 03:18:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:20.592 03:18:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:20.592 03:18:57 -- common/autotest_common.sh@10 -- # set +x
00:05:20.592 03:18:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:20.592 03:18:57 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:20.592 03:18:57 -- accel/accel.sh@40 -- # local IFS=,
00:05:20.592 03:18:57 -- accel/accel.sh@41 -- # jq -r .
00:05:20.592 ************************************
00:05:20.592 START TEST accel_dif_functional_tests
00:05:20.592 ************************************
00:05:20.592 03:18:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:05:20.592 [2024-04-19 03:18:57.910715] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization...
00:05:20.592 [2024-04-19 03:18:57.910776] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146625 ]
00:05:20.592 EAL: No free 2048 kB hugepages reported on node 1
00:05:20.592 [2024-04-19 03:18:57.970325] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:20.592 [2024-04-19 03:18:58.095208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:20.592 [2024-04-19 03:18:58.095263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:20.592 [2024-04-19 03:18:58.095267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.851
00:05:20.851
00:05:20.851 CUnit - A unit testing framework for C - Version 2.1-3
00:05:20.851 http://cunit.sourceforge.net/
00:05:20.851
00:05:20.851
00:05:20.851 Suite: accel_dif
00:05:20.851 Test: verify: DIF generated, GUARD check ...passed
00:05:20.851 Test: verify: DIF generated, APPTAG check ...passed
00:05:20.851 Test: verify: DIF generated, REFTAG check ...passed
00:05:20.851 Test: verify: DIF not generated, GUARD check ...[2024-04-19 03:18:58.197637] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:20.851 [2024-04-19 03:18:58.197704] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:20.851 passed
00:05:20.851 Test: verify: DIF not generated, APPTAG check ...[2024-04-19 03:18:58.197748] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:20.851 [2024-04-19 03:18:58.197779] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:20.851 passed
00:05:20.851 Test: verify: DIF not generated, REFTAG check ...[2024-04-19 03:18:58.197817] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:20.851 [2024-04-19 03:18:58.197847] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:20.851 passed
00:05:20.851 Test: verify: APPTAG correct, APPTAG check ...passed
00:05:20.851 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-19 03:18:58.197919] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:05:20.851 passed
00:05:20.851 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:05:20.851 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:05:20.851 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:05:20.851 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-19 03:18:58.198079] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:05:20.851 passed
00:05:20.851 Test: generate copy: DIF generated, GUARD check ...passed
00:05:20.851 Test: generate copy: DIF generated, APTTAG check ...passed
00:05:20.851 Test: generate copy: DIF generated, REFTAG check ...passed
00:05:20.851 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:05:20.851 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:05:20.851 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:05:20.851 Test: generate copy: iovecs-len validate ...[2024-04-19 03:18:58.198349] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:05:20.851 passed
00:05:20.851 Test: generate copy: buffer alignment validate ...passed
00:05:20.851
00:05:20.851 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:20.851               suites      1      1    n/a      0        0
00:05:20.851                tests     20     20     20      0        0
00:05:20.851              asserts    204    204    204      0      n/a
00:05:20.851
00:05:20.851 Elapsed time =    0.003 seconds
00:05:21.110
00:05:21.110 real    0m0.600s
00:05:21.110 user    0m0.887s
00:05:21.110 sys     0m0.176s
00:05:21.110 03:18:58 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:21.110 03:18:58 -- common/autotest_common.sh@10 -- # set +x
00:05:21.110 ************************************
00:05:21.110 END TEST accel_dif_functional_tests
00:05:21.110 ************************************
00:05:21.110
00:05:21.110 real    0m35.929s
00:05:21.110 user    0m38.257s
00:05:21.110 sys     0m5.593s
00:05:21.110 03:18:58 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:21.110 03:18:58 -- common/autotest_common.sh@10 -- # set +x
00:05:21.110 ************************************
00:05:21.110 END TEST accel
00:05:21.110 ************************************
00:05:21.110 03:18:58 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:05:21.110 03:18:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:21.110 03:18:58 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:21.110 03:18:58 -- common/autotest_common.sh@10 -- # set +x
00:05:21.110 ************************************
00:05:21.110 START TEST accel_rpc
00:05:21.110 ************************************
00:05:21.110 03:18:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:05:21.110 * Looking for test storage...
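The accel_dif suite above deliberately corrupts T10 DIF protection fields: the guard is a CRC computed over the data block (the 5a5a pattern left in the field does not match the computed 7867), while the application tag (16-bit) and reference tag (32-bit) are compared against expected values, and the iovecs-len case rejects bounce buffers misaligned with the block size. To rerun just this suite, a sketch along the lines of the wrapper above, with process substitution standing in for the fd-62 config (the '{}' empty config is an assumption; the harness built an empty accel_json_cfg here):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/accel/dif/dif -c <(echo '{}')   # hypothetical direct invocation of the CUnit binary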
00:05:21.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:21.110 03:18:58 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.110 03:18:58 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=146820 00:05:21.110 03:18:58 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:21.110 03:18:58 -- accel/accel_rpc.sh@15 -- # waitforlisten 146820 00:05:21.110 03:18:58 -- common/autotest_common.sh@817 -- # '[' -z 146820 ']' 00:05:21.110 03:18:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.110 03:18:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:21.110 03:18:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.110 03:18:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:21.110 03:18:58 -- common/autotest_common.sh@10 -- # set +x 00:05:21.369 [2024-04-19 03:18:58.707560] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:05:21.369 [2024-04-19 03:18:58.707640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146820 ] 00:05:21.369 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.369 [2024-04-19 03:18:58.769000] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.369 [2024-04-19 03:18:58.889455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.308 03:18:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:22.308 03:18:59 -- common/autotest_common.sh@850 -- # return 0 00:05:22.308 03:18:59 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:22.308 03:18:59 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:22.308 03:18:59 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:22.308 03:18:59 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:22.308 03:18:59 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:22.308 03:18:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.308 03:18:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.308 03:18:59 -- common/autotest_common.sh@10 -- # set +x 00:05:22.308 ************************************ 00:05:22.308 START TEST accel_assign_opcode 00:05:22.308 ************************************ 00:05:22.308 03:18:59 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:05:22.308 03:18:59 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:22.308 03:18:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.308 03:18:59 -- common/autotest_common.sh@10 -- # set +x 00:05:22.308 [2024-04-19 03:18:59.744134] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:22.308 03:18:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.308 03:18:59 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:22.308 03:18:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.308 03:18:59 -- common/autotest_common.sh@10 -- # set +x 00:05:22.308 [2024-04-19 03:18:59.752127] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:05:22.308 03:18:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.308 03:18:59 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:22.308 03:18:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.308 03:18:59 -- common/autotest_common.sh@10 -- # set +x 00:05:22.568 03:19:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.568 03:19:00 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:22.568 03:19:00 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:22.568 03:19:00 -- accel/accel_rpc.sh@42 -- # grep software 00:05:22.568 03:19:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.568 03:19:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.568 03:19:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.568 software 00:05:22.568 00:05:22.568 real 0m0.306s 00:05:22.568 user 0m0.041s 00:05:22.568 sys 0m0.008s 00:05:22.568 03:19:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.568 03:19:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.568 ************************************ 00:05:22.568 END TEST accel_assign_opcode 00:05:22.568 ************************************ 00:05:22.568 03:19:00 -- accel/accel_rpc.sh@55 -- # killprocess 146820 00:05:22.568 03:19:00 -- common/autotest_common.sh@936 -- # '[' -z 146820 ']' 00:05:22.568 03:19:00 -- common/autotest_common.sh@940 -- # kill -0 146820 00:05:22.568 03:19:00 -- common/autotest_common.sh@941 -- # uname 00:05:22.568 03:19:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:22.568 03:19:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146820 00:05:22.568 03:19:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:22.568 03:19:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:22.568 03:19:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146820' 00:05:22.568 killing process with pid 146820 00:05:22.568 03:19:00 -- common/autotest_common.sh@955 -- # kill 146820 00:05:22.568 03:19:00 -- common/autotest_common.sh@960 -- # wait 146820 00:05:23.136 00:05:23.136 real 0m1.980s 00:05:23.136 user 0m2.100s 00:05:23.136 sys 0m0.531s 00:05:23.137 03:19:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.137 03:19:00 -- common/autotest_common.sh@10 -- # set +x 00:05:23.137 ************************************ 00:05:23.137 END TEST accel_rpc 00:05:23.137 ************************************ 00:05:23.137 03:19:00 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:23.137 03:19:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.137 03:19:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.137 03:19:00 -- common/autotest_common.sh@10 -- # set +x 00:05:23.395 ************************************ 00:05:23.395 START TEST app_cmdline 00:05:23.395 ************************************ 00:05:23.395 03:19:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:23.395 * Looking for test storage... 
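The accel_rpc flow above, and the app_cmdline test starting next, both drive spdk_tgt over its Unix-socket JSON-RPC interface. The same sequence run by hand with scripts/rpc.py (a spdk_tgt started with --wait-for-rpc and the default /var/tmp/spdk.sock socket are assumed):

    ./scripts/rpc.py accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
    ./scripts/rpc.py framework_start_init                     # finish init so the assignment takes effect
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints "software", as asserted above
    ./scripts/rpc.py spdk_get_version | jq -r .version        # "SPDK v24.05-pre git sha1 c064dc584"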
00:05:23.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:23.395 03:19:00 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:23.395 03:19:00 -- app/cmdline.sh@17 -- # spdk_tgt_pid=147100 00:05:23.395 03:19:00 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:23.395 03:19:00 -- app/cmdline.sh@18 -- # waitforlisten 147100 00:05:23.395 03:19:00 -- common/autotest_common.sh@817 -- # '[' -z 147100 ']' 00:05:23.395 03:19:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.395 03:19:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:23.395 03:19:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.395 03:19:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:23.395 03:19:00 -- common/autotest_common.sh@10 -- # set +x 00:05:23.395 [2024-04-19 03:19:00.826594] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:05:23.395 [2024-04-19 03:19:00.826697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147100 ] 00:05:23.395 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.395 [2024-04-19 03:19:00.885667] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.654 [2024-04-19 03:19:00.990933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.912 03:19:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:23.912 03:19:01 -- common/autotest_common.sh@850 -- # return 0 00:05:23.912 03:19:01 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:24.170 { 00:05:24.170 "version": "SPDK v24.05-pre git sha1 c064dc584", 00:05:24.170 "fields": { 00:05:24.170 "major": 24, 00:05:24.170 "minor": 5, 00:05:24.170 "patch": 0, 00:05:24.170 "suffix": "-pre", 00:05:24.170 "commit": "c064dc584" 00:05:24.170 } 00:05:24.170 } 00:05:24.170 03:19:01 -- app/cmdline.sh@22 -- # expected_methods=() 00:05:24.170 03:19:01 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:24.170 03:19:01 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:24.170 03:19:01 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:24.170 03:19:01 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:24.170 03:19:01 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:24.170 03:19:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.170 03:19:01 -- common/autotest_common.sh@10 -- # set +x 00:05:24.170 03:19:01 -- app/cmdline.sh@26 -- # sort 00:05:24.170 03:19:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.170 03:19:01 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:24.170 03:19:01 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:24.170 03:19:01 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:24.170 03:19:01 -- common/autotest_common.sh@638 -- # local es=0 00:05:24.170 03:19:01 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:24.170 03:19:01 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:24.170 03:19:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:24.170 03:19:01 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:24.170 03:19:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:24.170 03:19:01 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:24.170 03:19:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:24.170 03:19:01 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:24.170 03:19:01 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:24.170 03:19:01 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:24.429 request: 00:05:24.429 { 00:05:24.429 "method": "env_dpdk_get_mem_stats", 00:05:24.429 "req_id": 1 00:05:24.429 } 00:05:24.429 Got JSON-RPC error response 00:05:24.429 response: 00:05:24.429 { 00:05:24.429 "code": -32601, 00:05:24.429 "message": "Method not found" 00:05:24.429 } 00:05:24.429 03:19:01 -- common/autotest_common.sh@641 -- # es=1 00:05:24.429 03:19:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:24.429 03:19:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:24.429 03:19:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:24.429 03:19:01 -- app/cmdline.sh@1 -- # killprocess 147100 00:05:24.429 03:19:01 -- common/autotest_common.sh@936 -- # '[' -z 147100 ']' 00:05:24.429 03:19:01 -- common/autotest_common.sh@940 -- # kill -0 147100 00:05:24.429 03:19:01 -- common/autotest_common.sh@941 -- # uname 00:05:24.429 03:19:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.429 03:19:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 147100 00:05:24.429 03:19:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.429 03:19:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.429 03:19:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 147100' 00:05:24.429 killing process with pid 147100 00:05:24.429 03:19:01 -- common/autotest_common.sh@955 -- # kill 147100 00:05:24.429 03:19:01 -- common/autotest_common.sh@960 -- # wait 147100 00:05:24.998 00:05:24.998 real 0m1.556s 00:05:24.998 user 0m1.843s 00:05:24.998 sys 0m0.479s 00:05:24.998 03:19:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.998 03:19:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.998 ************************************ 00:05:24.998 END TEST app_cmdline 00:05:24.998 ************************************ 00:05:24.998 03:19:02 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:24.998 03:19:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.998 03:19:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.998 03:19:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.998 ************************************ 00:05:24.998 START TEST version 00:05:24.998 
************************************ 00:05:24.998 03:19:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:24.998 * Looking for test storage... 00:05:24.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:24.998 03:19:02 -- app/version.sh@17 -- # get_header_version major 00:05:24.998 03:19:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:24.998 03:19:02 -- app/version.sh@14 -- # cut -f2 00:05:24.998 03:19:02 -- app/version.sh@14 -- # tr -d '"' 00:05:24.998 03:19:02 -- app/version.sh@17 -- # major=24 00:05:24.998 03:19:02 -- app/version.sh@18 -- # get_header_version minor 00:05:24.998 03:19:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:24.998 03:19:02 -- app/version.sh@14 -- # cut -f2 00:05:24.998 03:19:02 -- app/version.sh@14 -- # tr -d '"' 00:05:24.998 03:19:02 -- app/version.sh@18 -- # minor=5 00:05:24.998 03:19:02 -- app/version.sh@19 -- # get_header_version patch 00:05:24.998 03:19:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:24.998 03:19:02 -- app/version.sh@14 -- # cut -f2 00:05:24.998 03:19:02 -- app/version.sh@14 -- # tr -d '"' 00:05:24.998 03:19:02 -- app/version.sh@19 -- # patch=0 00:05:24.998 03:19:02 -- app/version.sh@20 -- # get_header_version suffix 00:05:24.998 03:19:02 -- app/version.sh@14 -- # cut -f2 00:05:24.998 03:19:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:24.998 03:19:02 -- app/version.sh@14 -- # tr -d '"' 00:05:24.998 03:19:02 -- app/version.sh@20 -- # suffix=-pre 00:05:24.998 03:19:02 -- app/version.sh@22 -- # version=24.5 00:05:24.998 03:19:02 -- app/version.sh@25 -- # (( patch != 0 )) 00:05:24.998 03:19:02 -- app/version.sh@28 -- # version=24.5rc0 00:05:24.998 03:19:02 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:24.998 03:19:02 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:24.998 03:19:02 -- app/version.sh@30 -- # py_version=24.5rc0 00:05:24.998 03:19:02 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:24.999 00:05:24.999 real 0m0.106s 00:05:24.999 user 0m0.053s 00:05:24.999 sys 0m0.075s 00:05:24.999 03:19:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.999 03:19:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.999 ************************************ 00:05:24.999 END TEST version 00:05:24.999 ************************************ 00:05:24.999 03:19:02 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:05:24.999 03:19:02 -- spdk/autotest.sh@194 -- # uname -s 00:05:24.999 03:19:02 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:24.999 03:19:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:24.999 03:19:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:24.999 03:19:02 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:24.999 03:19:02 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:05:24.999 03:19:02 -- spdk/autotest.sh@258 -- # timing_exit lib 00:05:24.999 03:19:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:24.999 03:19:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.999 03:19:02 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:05:24.999 03:19:02 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:05:24.999 03:19:02 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:05:24.999 03:19:02 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:05:24.999 03:19:02 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:05:24.999 03:19:02 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:05:24.999 03:19:02 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:24.999 03:19:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:24.999 03:19:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.999 03:19:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.257 ************************************ 00:05:25.258 START TEST nvmf_tcp 00:05:25.258 ************************************ 00:05:25.258 03:19:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:25.258 * Looking for test storage... 00:05:25.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:25.258 03:19:02 -- nvmf/nvmf.sh@10 -- # uname -s 00:05:25.258 03:19:02 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:25.258 03:19:02 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:25.258 03:19:02 -- nvmf/common.sh@7 -- # uname -s 00:05:25.258 03:19:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.258 03:19:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.258 03:19:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.258 03:19:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.258 03:19:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.258 03:19:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.258 03:19:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.258 03:19:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.258 03:19:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.258 03:19:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.258 03:19:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.258 03:19:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.258 03:19:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.258 03:19:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.258 03:19:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:25.258 03:19:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.258 03:19:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:25.258 03:19:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.258 03:19:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.258 03:19:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.258 03:19:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.258 03:19:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.258 03:19:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.258 03:19:02 -- paths/export.sh@5 -- # export PATH 00:05:25.258 03:19:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.258 03:19:02 -- nvmf/common.sh@47 -- # : 0 00:05:25.258 03:19:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:25.258 03:19:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:25.258 03:19:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.258 03:19:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.258 03:19:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.258 03:19:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:25.258 03:19:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:25.258 03:19:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:25.258 03:19:02 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:25.258 03:19:02 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:25.258 03:19:02 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:25.258 03:19:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:25.258 03:19:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.258 03:19:02 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:25.258 03:19:02 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:25.258 03:19:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:25.258 03:19:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.258 03:19:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.517 ************************************ 00:05:25.517 START TEST nvmf_example 00:05:25.517 ************************************ 00:05:25.517 03:19:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:25.517 * Looking for test storage... 
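Both nvmf suites source test/nvmf/common.sh, which derives the host NQN fresh from nvme-cli rather than hard-coding one. The two variables seen above can be reproduced as follows (the parameter expansion is one possible way to split them, not necessarily the script's own):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # bare UUID, e.g. 5b23e107-7094-e311-b1cb-001e67a97d55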
00:05:25.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:25.517 03:19:02 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:25.517 03:19:02 -- nvmf/common.sh@7 -- # uname -s 00:05:25.517 03:19:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.517 03:19:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.517 03:19:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.517 03:19:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.517 03:19:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.517 03:19:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.517 03:19:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.517 03:19:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.517 03:19:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.517 03:19:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.517 03:19:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.517 03:19:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.517 03:19:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.517 03:19:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.517 03:19:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:25.517 03:19:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.517 03:19:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:25.518 03:19:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.518 03:19:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.518 03:19:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.518 03:19:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.518 03:19:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.518 03:19:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.518 03:19:02 -- paths/export.sh@5 -- # export PATH 00:05:25.518 03:19:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.518 03:19:02 -- nvmf/common.sh@47 -- # : 0 00:05:25.518 03:19:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:25.518 03:19:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:25.518 03:19:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.518 03:19:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.518 03:19:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.518 03:19:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:25.518 03:19:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:25.518 03:19:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:25.518 03:19:02 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:25.518 03:19:02 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:25.518 03:19:02 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:25.518 03:19:02 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:25.518 03:19:02 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:25.518 03:19:02 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:25.518 03:19:02 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:25.518 03:19:02 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:25.518 03:19:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:25.518 03:19:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.518 03:19:02 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:25.518 03:19:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:05:25.518 03:19:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:25.518 03:19:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:25.518 03:19:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:25.518 03:19:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:25.518 03:19:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:25.518 03:19:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:25.518 03:19:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:25.518 03:19:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:05:25.518 03:19:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:25.518 03:19:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:25.518 03:19:02 -- 
common/autotest_common.sh@10 -- # set +x 00:05:27.427 03:19:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:27.427 03:19:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:27.427 03:19:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:27.427 03:19:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:27.427 03:19:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:27.427 03:19:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:27.427 03:19:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:27.427 03:19:04 -- nvmf/common.sh@295 -- # net_devs=() 00:05:27.427 03:19:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:27.427 03:19:04 -- nvmf/common.sh@296 -- # e810=() 00:05:27.427 03:19:04 -- nvmf/common.sh@296 -- # local -ga e810 00:05:27.427 03:19:04 -- nvmf/common.sh@297 -- # x722=() 00:05:27.427 03:19:04 -- nvmf/common.sh@297 -- # local -ga x722 00:05:27.427 03:19:04 -- nvmf/common.sh@298 -- # mlx=() 00:05:27.427 03:19:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:27.427 03:19:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:27.427 03:19:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:27.427 03:19:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:27.427 03:19:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:27.427 03:19:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:27.427 03:19:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:27.427 03:19:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:27.427 03:19:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:27.427 03:19:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:27.427 03:19:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:27.427 03:19:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:27.427 03:19:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:27.427 03:19:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:27.427 03:19:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:27.427 03:19:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:27.427 03:19:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:27.427 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:27.427 03:19:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:27.427 03:19:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:27.427 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:27.427 03:19:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
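The discovery loop above matches NICs by PCI vendor:device pairs out of a prebuilt bus cache; this run selects the two Intel E810 ports (0x8086:0x159b). The equivalent manual lookup, using the same sysfs path the script globs next to map each port to its netdev:

    lspci -d 8086:159b                          # lists 0000:0a:00.0 and 0000:0a:00.1, found above
    ls /sys/bus/pci/devices/0000:0a:00.0/net/   # kernel netdev behind the port (cvl_0_0 here)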
00:05:27.427 03:19:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:27.427 03:19:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:27.427 03:19:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:27.427 03:19:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:27.427 03:19:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:27.427 03:19:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:27.427 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:27.427 03:19:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:27.427 03:19:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:27.427 03:19:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:27.427 03:19:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:27.427 03:19:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:27.427 03:19:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:27.427 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:27.427 03:19:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:27.427 03:19:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:27.427 03:19:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:27.427 03:19:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:05:27.427 03:19:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:05:27.427 03:19:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:27.427 03:19:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:27.427 03:19:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:27.427 03:19:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:27.427 03:19:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:27.428 03:19:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:27.428 03:19:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:27.428 03:19:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:27.428 03:19:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:27.428 03:19:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:27.428 03:19:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:27.428 03:19:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:27.428 03:19:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:27.428 03:19:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:27.428 03:19:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:27.428 03:19:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:27.428 03:19:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:27.724 03:19:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:27.724 03:19:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:27.724 03:19:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:27.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:27.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms
00:05:27.724
00:05:27.724 --- 10.0.0.2 ping statistics ---
00:05:27.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:27.724 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms
00:05:27.724 03:19:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:27.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:27.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms
00:05:27.724
00:05:27.724 --- 10.0.0.1 ping statistics ---
00:05:27.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:27.724 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms
00:05:27.724 03:19:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:27.724 03:19:05 -- nvmf/common.sh@411 -- # return 0
00:05:27.724 03:19:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:05:27.724 03:19:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:27.724 03:19:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:05:27.724 03:19:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:05:27.724 03:19:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:27.724 03:19:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:05:27.724 03:19:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:05:27.724 03:19:05 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:05:27.724 03:19:05 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:05:27.724 03:19:05 -- common/autotest_common.sh@710 -- # xtrace_disable
00:05:27.724 03:19:05 -- common/autotest_common.sh@10 -- # set +x
00:05:27.724 03:19:05 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:05:27.724 03:19:05 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:05:27.724 03:19:05 -- target/nvmf_example.sh@34 -- # nvmfpid=149100
00:05:27.724 03:19:05 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:05:27.724 03:19:05 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:05:27.724 03:19:05 -- target/nvmf_example.sh@36 -- # waitforlisten 149100
00:05:27.724 03:19:05 -- common/autotest_common.sh@817 -- # '[' -z 149100 ']'
00:05:27.724 03:19:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:27.724 03:19:05 -- common/autotest_common.sh@822 -- # local max_retries=100
00:05:27.724 03:19:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:27.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
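Condensed, the nvmf_tcp_init sequence above builds a two-port loopback: one E810 port moves into a private network namespace as the target side (10.0.0.2) while its peer stays in the root namespace as the initiator (10.0.0.1); the two ports are evidently cabled to each other, so the NVMe/TCP traffic really leaves the host stack. The commands, taken from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                              # the reachability check run above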
00:05:27.724 03:19:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.724 03:19:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.724 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.982 03:19:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:27.982 03:19:05 -- common/autotest_common.sh@850 -- # return 0 00:05:27.982 03:19:05 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:27.982 03:19:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:27.982 03:19:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.982 03:19:05 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:27.982 03:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:27.982 03:19:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.982 03:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:27.982 03:19:05 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:27.982 03:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:27.982 03:19:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.982 03:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:27.982 03:19:05 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:27.982 03:19:05 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:27.982 03:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:27.982 03:19:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.982 03:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:27.982 03:19:05 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:27.982 03:19:05 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:27.982 03:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:27.982 03:19:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.982 03:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:27.982 03:19:05 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:27.982 03:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:27.982 03:19:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.982 03:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:27.982 03:19:05 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:27.982 03:19:05 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:27.982 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.182 Initializing NVMe Controllers 00:05:40.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:40.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:40.182 Initialization complete. Launching workers. 
00:05:40.182 ========================================================
00:05:40.182 Latency(us)
00:05:40.182 Device Information : IOPS MiB/s Average min max
00:05:40.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14260.09 55.70 4489.90 905.13 16051.50
00:05:40.182 ========================================================
00:05:40.182 Total : 14260.09 55.70 4489.90 905.13 16051.50
00:05:40.182
00:05:40.182 03:19:15 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:05:40.182 03:19:15 -- target/nvmf_example.sh@66 -- # nvmftestfini
00:05:40.182 03:19:15 -- nvmf/common.sh@477 -- # nvmfcleanup
00:05:40.182 03:19:15 -- nvmf/common.sh@117 -- # sync
00:05:40.182 03:19:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:05:40.182 03:19:15 -- nvmf/common.sh@120 -- # set +e
00:05:40.182 03:19:15 -- nvmf/common.sh@121 -- # for i in {1..20}
00:05:40.182 03:19:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:05:40.182 rmmod nvme_tcp
00:05:40.182 rmmod nvme_fabrics
00:05:40.182 rmmod nvme_keyring
00:05:40.182 03:19:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:05:40.182 03:19:15 -- nvmf/common.sh@124 -- # set -e
00:05:40.182 03:19:15 -- nvmf/common.sh@125 -- # return 0
00:05:40.182 03:19:15 -- nvmf/common.sh@478 -- # '[' -n 149100 ']'
00:05:40.182 03:19:15 -- nvmf/common.sh@479 -- # killprocess 149100
00:05:40.182 03:19:15 -- common/autotest_common.sh@936 -- # '[' -z 149100 ']'
00:05:40.182 03:19:15 -- common/autotest_common.sh@940 -- # kill -0 149100
00:05:40.182 03:19:15 -- common/autotest_common.sh@941 -- # uname
00:05:40.182 03:19:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:40.182 03:19:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149100
00:05:40.182 03:19:15 -- common/autotest_common.sh@942 -- # process_name=nvmf
00:05:40.182 03:19:15 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']'
00:05:40.182 03:19:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149100'
00:05:40.182 killing process with pid 149100
00:05:40.182 03:19:15 -- common/autotest_common.sh@955 -- # kill 149100
00:05:40.182 03:19:15 -- common/autotest_common.sh@960 -- # wait 149100
00:05:40.182 nvmf threads initialize successfully
00:05:40.182 bdev subsystem init successfully
00:05:40.182 created a nvmf target service
00:05:40.182 create targets's poll groups done
00:05:40.182 all subsystems of target started
00:05:40.182 nvmf target is running
00:05:40.182 all subsystems of target stopped
00:05:40.182 destroy targets's poll groups done
00:05:40.182 destroyed the nvmf target service
00:05:40.182 bdev subsystem finish successfully
00:05:40.182 nvmf threads destroy successfully
00:05:40.182 03:19:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:05:40.182 03:19:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:05:40.182 03:19:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:05:40.182 03:19:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:05:40.182 03:19:16 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:05:40.182 03:19:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:40.182 03:19:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:05:40.182 03:19:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:40.752 03:19:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:05:40.752 03:19:18 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:05:40.752 03:19:18 -- common/autotest_common.sh@716 -- #
xtrace_disable 00:05:40.752 03:19:18 -- common/autotest_common.sh@10 -- # set +x 00:05:40.752 00:05:40.752 real 0m15.315s 00:05:40.752 user 0m42.469s 00:05:40.752 sys 0m3.261s 00:05:40.752 03:19:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.752 03:19:18 -- common/autotest_common.sh@10 -- # set +x 00:05:40.752 ************************************ 00:05:40.752 END TEST nvmf_example 00:05:40.752 ************************************ 00:05:40.752 03:19:18 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:40.752 03:19:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:40.752 03:19:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.752 03:19:18 -- common/autotest_common.sh@10 -- # set +x 00:05:40.752 ************************************ 00:05:40.752 START TEST nvmf_filesystem 00:05:40.752 ************************************ 00:05:40.752 03:19:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:41.014 * Looking for test storage... 00:05:41.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.014 03:19:18 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:41.014 03:19:18 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:41.014 03:19:18 -- common/autotest_common.sh@34 -- # set -e 00:05:41.014 03:19:18 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:41.014 03:19:18 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:41.014 03:19:18 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:41.014 03:19:18 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:41.014 03:19:18 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:41.014 03:19:18 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:41.014 03:19:18 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:41.014 03:19:18 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:41.014 03:19:18 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:41.014 03:19:18 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:41.014 03:19:18 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:41.014 03:19:18 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:41.014 03:19:18 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:41.014 03:19:18 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:41.014 03:19:18 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:41.014 03:19:18 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:41.014 03:19:18 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:41.014 03:19:18 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:41.014 03:19:18 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:41.014 03:19:18 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:41.014 03:19:18 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:41.014 03:19:18 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:41.014 03:19:18 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:41.014 03:19:18 -- common/build_config.sh@19 -- # 
CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:41.014 03:19:18 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:41.014 03:19:18 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:41.014 03:19:18 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:41.014 03:19:18 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:41.014 03:19:18 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:41.014 03:19:18 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:41.014 03:19:18 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:41.014 03:19:18 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:41.014 03:19:18 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:41.014 03:19:18 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:41.014 03:19:18 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:41.014 03:19:18 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:41.014 03:19:18 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:41.014 03:19:18 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:41.014 03:19:18 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:41.014 03:19:18 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:41.014 03:19:18 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:41.014 03:19:18 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:41.014 03:19:18 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:41.014 03:19:18 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:41.014 03:19:18 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:41.014 03:19:18 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:41.014 03:19:18 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:41.014 03:19:18 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:41.014 03:19:18 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:41.014 03:19:18 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:41.014 03:19:18 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:05:41.014 03:19:18 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:05:41.014 03:19:18 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:41.014 03:19:18 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:05:41.014 03:19:18 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:05:41.014 03:19:18 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:05:41.014 03:19:18 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:05:41.014 03:19:18 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:05:41.014 03:19:18 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:05:41.014 03:19:18 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:05:41.014 03:19:18 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:05:41.014 03:19:18 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:05:41.014 03:19:18 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:05:41.014 03:19:18 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:05:41.014 03:19:18 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:05:41.014 03:19:18 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:05:41.014 03:19:18 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:05:41.014 03:19:18 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:05:41.014 03:19:18 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:05:41.014 03:19:18 -- common/build_config.sh@65 
-- # CONFIG_SHARED=y 00:05:41.014 03:19:18 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:05:41.014 03:19:18 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:05:41.014 03:19:18 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:41.014 03:19:18 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:05:41.014 03:19:18 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:05:41.014 03:19:18 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:05:41.014 03:19:18 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:05:41.014 03:19:18 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:05:41.014 03:19:18 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:05:41.014 03:19:18 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:05:41.014 03:19:18 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:05:41.014 03:19:18 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:05:41.014 03:19:18 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:05:41.014 03:19:18 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:05:41.014 03:19:18 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:41.014 03:19:18 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:05:41.014 03:19:18 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:05:41.014 03:19:18 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:41.014 03:19:18 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:41.014 03:19:18 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:41.014 03:19:18 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:41.014 03:19:18 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.014 03:19:18 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:41.014 03:19:18 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:41.014 03:19:18 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:41.014 03:19:18 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:41.014 03:19:18 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:41.014 03:19:18 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:41.014 03:19:18 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:41.014 03:19:18 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:41.014 03:19:18 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:41.014 03:19:18 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:41.014 03:19:18 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:41.014 #define SPDK_CONFIG_H 00:05:41.014 #define SPDK_CONFIG_APPS 1 00:05:41.014 #define SPDK_CONFIG_ARCH native 00:05:41.014 #undef SPDK_CONFIG_ASAN 00:05:41.014 #undef SPDK_CONFIG_AVAHI 00:05:41.014 #undef SPDK_CONFIG_CET 00:05:41.014 #define SPDK_CONFIG_COVERAGE 1 00:05:41.014 #define SPDK_CONFIG_CROSS_PREFIX 00:05:41.014 #undef SPDK_CONFIG_CRYPTO 00:05:41.014 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:41.014 #undef SPDK_CONFIG_CUSTOMOCF 00:05:41.015 
#undef SPDK_CONFIG_DAOS 00:05:41.015 #define SPDK_CONFIG_DAOS_DIR 00:05:41.015 #define SPDK_CONFIG_DEBUG 1 00:05:41.015 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:41.015 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:41.015 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:41.015 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:41.015 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:41.015 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:41.015 #define SPDK_CONFIG_EXAMPLES 1 00:05:41.015 #undef SPDK_CONFIG_FC 00:05:41.015 #define SPDK_CONFIG_FC_PATH 00:05:41.015 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:41.015 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:41.015 #undef SPDK_CONFIG_FUSE 00:05:41.015 #undef SPDK_CONFIG_FUZZER 00:05:41.015 #define SPDK_CONFIG_FUZZER_LIB 00:05:41.015 #undef SPDK_CONFIG_GOLANG 00:05:41.015 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:41.015 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:41.015 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:41.015 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:05:41.015 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:41.015 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:41.015 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:41.015 #define SPDK_CONFIG_IDXD 1 00:05:41.015 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:41.015 #undef SPDK_CONFIG_IPSEC_MB 00:05:41.015 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:41.015 #define SPDK_CONFIG_ISAL 1 00:05:41.015 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:41.015 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:41.015 #define SPDK_CONFIG_LIBDIR 00:05:41.015 #undef SPDK_CONFIG_LTO 00:05:41.015 #define SPDK_CONFIG_MAX_LCORES 00:05:41.015 #define SPDK_CONFIG_NVME_CUSE 1 00:05:41.015 #undef SPDK_CONFIG_OCF 00:05:41.015 #define SPDK_CONFIG_OCF_PATH 00:05:41.015 #define SPDK_CONFIG_OPENSSL_PATH 00:05:41.015 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:41.015 #define SPDK_CONFIG_PGO_DIR 00:05:41.015 #undef SPDK_CONFIG_PGO_USE 00:05:41.015 #define SPDK_CONFIG_PREFIX /usr/local 00:05:41.015 #undef SPDK_CONFIG_RAID5F 00:05:41.015 #undef SPDK_CONFIG_RBD 00:05:41.015 #define SPDK_CONFIG_RDMA 1 00:05:41.015 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:41.015 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:41.015 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:41.015 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:41.015 #define SPDK_CONFIG_SHARED 1 00:05:41.015 #undef SPDK_CONFIG_SMA 00:05:41.015 #define SPDK_CONFIG_TESTS 1 00:05:41.015 #undef SPDK_CONFIG_TSAN 00:05:41.015 #define SPDK_CONFIG_UBLK 1 00:05:41.015 #define SPDK_CONFIG_UBSAN 1 00:05:41.015 #undef SPDK_CONFIG_UNIT_TESTS 00:05:41.015 #undef SPDK_CONFIG_URING 00:05:41.015 #define SPDK_CONFIG_URING_PATH 00:05:41.015 #undef SPDK_CONFIG_URING_ZNS 00:05:41.015 #undef SPDK_CONFIG_USDT 00:05:41.015 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:41.015 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:41.015 #define SPDK_CONFIG_VFIO_USER 1 00:05:41.015 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:41.015 #define SPDK_CONFIG_VHOST 1 00:05:41.015 #define SPDK_CONFIG_VIRTIO 1 00:05:41.015 #undef SPDK_CONFIG_VTUNE 00:05:41.015 #define SPDK_CONFIG_VTUNE_DIR 00:05:41.015 #define SPDK_CONFIG_WERROR 1 00:05:41.015 #define SPDK_CONFIG_WPDK_DIR 00:05:41.015 #undef SPDK_CONFIG_XNVME 00:05:41.015 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:41.015 03:19:18 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:41.015 03:19:18 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.015 03:19:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.015 03:19:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.015 03:19:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.015 03:19:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.015 03:19:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.015 03:19:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.015 03:19:18 -- paths/export.sh@5 -- # export PATH 00:05:41.015 03:19:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.015 03:19:18 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:41.015 03:19:18 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:41.015 03:19:18 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:41.015 03:19:18 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:41.015 03:19:18 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:41.015 03:19:18 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.015 03:19:18 -- pm/common@67 -- # TEST_TAG=N/A 00:05:41.015 03:19:18 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:41.015 03:19:18 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:41.015 03:19:18 -- pm/common@71 -- # uname -s 00:05:41.015 03:19:18 -- pm/common@71 -- # PM_OS=Linux 00:05:41.015 03:19:18 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:41.015 03:19:18 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:05:41.015 03:19:18 -- pm/common@76 -- # [[ Linux == Linux ]] 00:05:41.015 03:19:18 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:05:41.015 03:19:18 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:05:41.015 03:19:18 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:41.015 03:19:18 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:41.015 03:19:18 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:05:41.015 03:19:18 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:05:41.015 03:19:18 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:41.015 03:19:18 -- common/autotest_common.sh@57 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:05:41.015 03:19:18 -- common/autotest_common.sh@61 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:41.015 03:19:18 -- common/autotest_common.sh@63 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:05:41.015 03:19:18 -- common/autotest_common.sh@65 -- # : 1 00:05:41.015 03:19:18 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:41.015 03:19:18 -- common/autotest_common.sh@67 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:05:41.015 03:19:18 -- common/autotest_common.sh@69 -- # : 00:05:41.015 03:19:18 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:05:41.015 03:19:18 -- common/autotest_common.sh@71 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:05:41.015 03:19:18 -- common/autotest_common.sh@73 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:05:41.015 03:19:18 -- common/autotest_common.sh@75 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:05:41.015 03:19:18 -- common/autotest_common.sh@77 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:41.015 03:19:18 -- common/autotest_common.sh@79 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:05:41.015 03:19:18 -- common/autotest_common.sh@81 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:05:41.015 03:19:18 -- common/autotest_common.sh@83 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:05:41.015 03:19:18 -- common/autotest_common.sh@85 -- # : 1 00:05:41.015 03:19:18 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:05:41.015 03:19:18 -- common/autotest_common.sh@87 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:05:41.015 03:19:18 -- common/autotest_common.sh@89 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:05:41.015 03:19:18 -- common/autotest_common.sh@91 -- # : 1 
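The long run of ': 0' (or ': 1') lines paired with 'export SPDK_TEST_*' above and below is autotest_common.sh materializing one feature flag per traced line; the shape appears to be bash default-expansion, roughly:

    # Sketch of the apparent flag idiom (one pair per SPDK_TEST_* knob);
    # values injected earlier by autorun-spdk.conf survive, hence ': 1'
    # for flags such as SPDK_TEST_NVMF in this run.
    : "${SPDK_TEST_NVMF:=0}"    # ':' is a no-op, but the expansion assigns
    export SPDK_TEST_NVMF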
00:05:41.015 03:19:18 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:05:41.015 03:19:18 -- common/autotest_common.sh@93 -- # : 1 00:05:41.015 03:19:18 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:05:41.015 03:19:18 -- common/autotest_common.sh@95 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:41.015 03:19:18 -- common/autotest_common.sh@97 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:05:41.015 03:19:18 -- common/autotest_common.sh@99 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:05:41.015 03:19:18 -- common/autotest_common.sh@101 -- # : tcp 00:05:41.015 03:19:18 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:41.015 03:19:18 -- common/autotest_common.sh@103 -- # : 0 00:05:41.015 03:19:18 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:05:41.015 03:19:18 -- common/autotest_common.sh@105 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:05:41.016 03:19:18 -- common/autotest_common.sh@107 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:05:41.016 03:19:18 -- common/autotest_common.sh@109 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:05:41.016 03:19:18 -- common/autotest_common.sh@111 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:05:41.016 03:19:18 -- common/autotest_common.sh@113 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:05:41.016 03:19:18 -- common/autotest_common.sh@115 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:05:41.016 03:19:18 -- common/autotest_common.sh@117 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:41.016 03:19:18 -- common/autotest_common.sh@119 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:05:41.016 03:19:18 -- common/autotest_common.sh@121 -- # : 1 00:05:41.016 03:19:18 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:05:41.016 03:19:18 -- common/autotest_common.sh@123 -- # : 00:05:41.016 03:19:18 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:41.016 03:19:18 -- common/autotest_common.sh@125 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:05:41.016 03:19:18 -- common/autotest_common.sh@127 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:05:41.016 03:19:18 -- common/autotest_common.sh@129 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:05:41.016 03:19:18 -- common/autotest_common.sh@131 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:05:41.016 03:19:18 -- common/autotest_common.sh@133 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:05:41.016 03:19:18 -- common/autotest_common.sh@135 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:05:41.016 03:19:18 -- common/autotest_common.sh@137 -- # : 00:05:41.016 03:19:18 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:05:41.016 03:19:18 -- 
common/autotest_common.sh@139 -- # : true 00:05:41.016 03:19:18 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:05:41.016 03:19:18 -- common/autotest_common.sh@141 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:05:41.016 03:19:18 -- common/autotest_common.sh@143 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:05:41.016 03:19:18 -- common/autotest_common.sh@145 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:05:41.016 03:19:18 -- common/autotest_common.sh@147 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:05:41.016 03:19:18 -- common/autotest_common.sh@149 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:05:41.016 03:19:18 -- common/autotest_common.sh@151 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:05:41.016 03:19:18 -- common/autotest_common.sh@153 -- # : e810 00:05:41.016 03:19:18 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:05:41.016 03:19:18 -- common/autotest_common.sh@155 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:05:41.016 03:19:18 -- common/autotest_common.sh@157 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:05:41.016 03:19:18 -- common/autotest_common.sh@159 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:05:41.016 03:19:18 -- common/autotest_common.sh@161 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:05:41.016 03:19:18 -- common/autotest_common.sh@163 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:05:41.016 03:19:18 -- common/autotest_common.sh@166 -- # : 00:05:41.016 03:19:18 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:05:41.016 03:19:18 -- common/autotest_common.sh@168 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:05:41.016 03:19:18 -- common/autotest_common.sh@170 -- # : 0 00:05:41.016 03:19:18 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:41.016 03:19:18 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:41.016 03:19:18 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:41.016 03:19:18 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:41.016 03:19:18 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:41.016 03:19:18 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:41.016 03:19:18 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:41.016 03:19:18 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:41.016 03:19:18 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:41.016 03:19:18 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:41.016 03:19:18 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:41.016 03:19:18 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:41.016 03:19:18 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:41.016 03:19:18 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:41.016 03:19:18 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:05:41.016 03:19:18 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:41.016 03:19:18 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:41.016 03:19:18 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:41.016 03:19:18 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:41.016 03:19:18 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:41.016 03:19:18 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:05:41.016 03:19:18 -- common/autotest_common.sh@199 -- # cat 00:05:41.016 03:19:18 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:05:41.016 03:19:18 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:41.016 03:19:18 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:41.016 03:19:18 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:41.016 03:19:18 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:41.016 03:19:18 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:05:41.016 03:19:18 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:05:41.016 03:19:18 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:41.016 03:19:18 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:41.016 03:19:18 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:41.016 03:19:18 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:41.016 03:19:18 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:41.016 03:19:18 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:41.016 03:19:18 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:41.016 03:19:18 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:41.016 03:19:18 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:41.016 03:19:18 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:41.016 03:19:18 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:41.016 03:19:18 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:41.016 03:19:18 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:05:41.016 03:19:18 -- common/autotest_common.sh@252 -- # export valgrind= 00:05:41.016 03:19:18 -- common/autotest_common.sh@252 -- # valgrind= 00:05:41.016 03:19:18 -- common/autotest_common.sh@258 -- # uname -s 00:05:41.016 03:19:18 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:05:41.016 03:19:18 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:05:41.016 03:19:18 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:05:41.016 03:19:18 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:05:41.016 03:19:18 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:05:41.016 03:19:18 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:05:41.017 
03:19:18 -- common/autotest_common.sh@268 -- # MAKE=make 00:05:41.017 03:19:18 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:05:41.017 03:19:18 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:05:41.017 03:19:18 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:05:41.017 03:19:18 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:05:41.017 03:19:18 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:05:41.017 03:19:18 -- common/autotest_common.sh@289 -- # for i in "$@" 00:05:41.017 03:19:18 -- common/autotest_common.sh@290 -- # case "$i" in 00:05:41.017 03:19:18 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:05:41.017 03:19:18 -- common/autotest_common.sh@307 -- # [[ -z 150802 ]] 00:05:41.017 03:19:18 -- common/autotest_common.sh@307 -- # kill -0 150802 00:05:41.017 03:19:18 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:05:41.017 03:19:18 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:05:41.017 03:19:18 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:05:41.017 03:19:18 -- common/autotest_common.sh@320 -- # local mount target_dir 00:05:41.017 03:19:18 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:05:41.017 03:19:18 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:05:41.017 03:19:18 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:05:41.017 03:19:18 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:05:41.017 03:19:18 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.a25R96 00:05:41.017 03:19:18 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:41.017 03:19:18 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:05:41.017 03:19:18 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:05:41.017 03:19:18 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.a25R96/tests/target /tmp/spdk.a25R96 00:05:41.017 03:19:18 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:05:41.017 03:19:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:41.017 03:19:18 -- common/autotest_common.sh@316 -- # df -T 00:05:41.017 03:19:18 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:05:41.017 03:19:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:05:41.017 03:19:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=996749312 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:05:41.017 03:19:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=4287680512 00:05:41.017 03:19:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=48038535168 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994717184 00:05:41.017 03:19:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=13956182016 00:05:41.017 03:19:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=30941720576 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997356544 00:05:41.017 03:19:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=55635968 00:05:41.017 03:19:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=12390178816 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398944256 00:05:41.017 03:19:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=8765440 00:05:41.017 03:19:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996627456 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997360640 00:05:41.017 03:19:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=733184 00:05:41.017 03:19:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:41.017 03:19:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199463936 00:05:41.017 03:19:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199468032 00:05:41.017 03:19:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:05:41.017 03:19:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:41.017 03:19:18 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:05:41.017 * Looking for test storage... 
00:05:41.017 03:19:18 -- common/autotest_common.sh@357 -- # local target_space new_size 00:05:41.017 03:19:18 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:05:41.017 03:19:18 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.017 03:19:18 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:41.017 03:19:18 -- common/autotest_common.sh@361 -- # mount=/ 00:05:41.017 03:19:18 -- common/autotest_common.sh@363 -- # target_space=48038535168 00:05:41.017 03:19:18 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:05:41.017 03:19:18 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:05:41.017 03:19:18 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:05:41.017 03:19:18 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:05:41.017 03:19:18 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:05:41.017 03:19:18 -- common/autotest_common.sh@370 -- # new_size=16170774528 00:05:41.017 03:19:18 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:41.017 03:19:18 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.017 03:19:18 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.017 03:19:18 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.017 03:19:18 -- common/autotest_common.sh@378 -- # return 0 00:05:41.017 03:19:18 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:05:41.017 03:19:18 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:05:41.017 03:19:18 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:41.017 03:19:18 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:41.017 03:19:18 -- common/autotest_common.sh@1673 -- # true 00:05:41.017 03:19:18 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:05:41.017 03:19:18 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:41.017 03:19:18 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:41.017 03:19:18 -- common/autotest_common.sh@27 -- # exec 00:05:41.017 03:19:18 -- common/autotest_common.sh@29 -- # exec 00:05:41.017 03:19:18 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:41.017 03:19:18 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:41.017 03:19:18 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:41.017 03:19:18 -- common/autotest_common.sh@18 -- # set -x 00:05:41.017 03:19:18 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.017 03:19:18 -- nvmf/common.sh@7 -- # uname -s 00:05:41.017 03:19:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.017 03:19:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.017 03:19:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.017 03:19:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.017 03:19:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.017 03:19:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.017 03:19:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.017 03:19:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.017 03:19:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.017 03:19:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.017 03:19:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:41.017 03:19:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:41.017 03:19:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.017 03:19:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.017 03:19:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:41.017 03:19:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.017 03:19:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.017 03:19:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.017 03:19:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.017 03:19:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.017 03:19:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.018 03:19:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.018 03:19:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.018 03:19:18 -- paths/export.sh@5 -- # export PATH 00:05:41.018 03:19:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.018 03:19:18 -- nvmf/common.sh@47 -- # : 0 00:05:41.018 03:19:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:41.018 03:19:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:41.018 03:19:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.018 03:19:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.018 03:19:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.018 03:19:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:41.018 03:19:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:41.018 03:19:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:41.018 03:19:18 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:41.018 03:19:18 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:41.018 03:19:18 -- target/filesystem.sh@15 -- # nvmftestinit 00:05:41.018 03:19:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:05:41.018 03:19:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:41.018 03:19:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:41.018 03:19:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:41.018 03:19:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:41.018 03:19:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:41.018 03:19:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:41.018 03:19:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:41.018 03:19:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:05:41.018 03:19:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:41.018 03:19:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:41.018 03:19:18 -- common/autotest_common.sh@10 -- # set +x 00:05:43.591 03:19:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:43.591 03:19:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:43.591 03:19:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:43.591 03:19:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:43.591 03:19:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:43.591 03:19:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:43.591 03:19:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:43.591 03:19:20 -- 
nvmf/common.sh@295 -- # net_devs=() 00:05:43.591 03:19:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:43.591 03:19:20 -- nvmf/common.sh@296 -- # e810=() 00:05:43.591 03:19:20 -- nvmf/common.sh@296 -- # local -ga e810 00:05:43.591 03:19:20 -- nvmf/common.sh@297 -- # x722=() 00:05:43.591 03:19:20 -- nvmf/common.sh@297 -- # local -ga x722 00:05:43.591 03:19:20 -- nvmf/common.sh@298 -- # mlx=() 00:05:43.591 03:19:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:43.591 03:19:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:43.591 03:19:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:43.591 03:19:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:43.591 03:19:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:43.591 03:19:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:43.591 03:19:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:43.591 03:19:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:43.591 03:19:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:43.591 03:19:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:43.591 03:19:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:43.591 03:19:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:43.591 03:19:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:43.591 03:19:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:43.591 03:19:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:43.592 03:19:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:43.592 03:19:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:43.592 03:19:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:43.592 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:43.592 03:19:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:43.592 03:19:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:43.592 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:43.592 03:19:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:43.592 03:19:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:43.592 03:19:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:43.592 03:19:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:43.592 03:19:20 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:43.592 03:19:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:43.592 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:43.592 03:19:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:43.592 03:19:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:43.592 03:19:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:43.592 03:19:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:43.592 03:19:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:43.592 03:19:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:43.592 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:43.592 03:19:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:43.592 03:19:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:43.592 03:19:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:43.592 03:19:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:05:43.592 03:19:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:43.592 03:19:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:43.592 03:19:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:43.592 03:19:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:43.592 03:19:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:43.592 03:19:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:43.592 03:19:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:43.592 03:19:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:43.592 03:19:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:43.592 03:19:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:43.592 03:19:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:43.592 03:19:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:43.592 03:19:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:43.592 03:19:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:43.592 03:19:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:43.592 03:19:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:43.592 03:19:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:43.592 03:19:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:43.592 03:19:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:43.592 03:19:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:43.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:43.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:05:43.592 00:05:43.592 --- 10.0.0.2 ping statistics --- 00:05:43.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:43.592 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:05:43.592 03:19:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:43.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:43.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:05:43.592 00:05:43.592 --- 10.0.0.1 ping statistics --- 00:05:43.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:43.592 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:05:43.592 03:19:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:43.592 03:19:20 -- nvmf/common.sh@411 -- # return 0 00:05:43.592 03:19:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:05:43.592 03:19:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:43.592 03:19:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:05:43.592 03:19:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:43.592 03:19:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:05:43.592 03:19:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:05:43.592 03:19:20 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:05:43.592 03:19:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:43.592 03:19:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.592 03:19:20 -- common/autotest_common.sh@10 -- # set +x 00:05:43.592 ************************************ 00:05:43.592 START TEST nvmf_filesystem_no_in_capsule 00:05:43.592 ************************************ 00:05:43.592 03:19:20 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:05:43.592 03:19:20 -- target/filesystem.sh@47 -- # in_capsule=0 00:05:43.592 03:19:20 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:43.592 03:19:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:05:43.592 03:19:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:43.592 03:19:20 -- common/autotest_common.sh@10 -- # set +x 00:05:43.592 03:19:20 -- nvmf/common.sh@470 -- # nvmfpid=152441 00:05:43.592 03:19:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:43.592 03:19:20 -- nvmf/common.sh@471 -- # waitforlisten 152441 00:05:43.592 03:19:20 -- common/autotest_common.sh@817 -- # '[' -z 152441 ']' 00:05:43.592 03:19:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.592 03:19:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:43.592 03:19:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.592 03:19:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:43.592 03:19:20 -- common/autotest_common.sh@10 -- # set +x 00:05:43.592 [2024-04-19 03:19:20.850051] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:05:43.592 [2024-04-19 03:19:20.850145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:43.592 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.592 [2024-04-19 03:19:20.919666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.592 [2024-04-19 03:19:21.035049] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
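nvmfappstart, traced above, launches the target inside the namespace and blocks in waitforlisten until the RPC socket answers. Roughly the following, where the binary path, socket path, and max_retries=100 come from the trace, but the poll loop body is an illustrative reconstruction rather than the harness's exact code:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
max_retries=100
# Poll the RPC socket until the target is ready to accept rpc_cmd calls.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    (( --max_retries > 0 )) || exit 1    # give up if the target never listens
    sleep 0.1
done

The -m 0xF core mask pins the app to cores 0-3, which is why four reactor threads report in just below.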
00:05:43.592 [2024-04-19 03:19:21.035113] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:43.592 [2024-04-19 03:19:21.035141] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:43.592 [2024-04-19 03:19:21.035154] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:43.592 [2024-04-19 03:19:21.035175] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:43.592 [2024-04-19 03:19:21.035262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.592 [2024-04-19 03:19:21.035321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.592 [2024-04-19 03:19:21.035323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.592 [2024-04-19 03:19:21.035292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.850 03:19:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:43.850 03:19:21 -- common/autotest_common.sh@850 -- # return 0 00:05:43.850 03:19:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:05:43.850 03:19:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:43.850 03:19:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.850 03:19:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:43.850 03:19:21 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:43.850 03:19:21 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:05:43.850 03:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:43.850 03:19:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.850 [2024-04-19 03:19:21.196193] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:43.850 03:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:43.850 03:19:21 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:43.850 03:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:43.850 03:19:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.850 Malloc1 00:05:43.850 03:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:43.850 03:19:21 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:43.850 03:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:43.850 03:19:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.850 03:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:43.851 03:19:21 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:43.851 03:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:43.851 03:19:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.851 03:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:43.851 03:19:21 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:43.851 03:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:43.851 03:19:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.851 [2024-04-19 03:19:21.384024] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:43.851 03:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:43.851 03:19:21 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:05:43.851 03:19:21 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:05:43.851 03:19:21 -- common/autotest_common.sh@1365 -- # local bdev_info 00:05:43.851 03:19:21 -- common/autotest_common.sh@1366 -- # local bs 00:05:43.851 03:19:21 -- common/autotest_common.sh@1367 -- # local nb 00:05:43.851 03:19:21 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:43.851 03:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:43.851 03:19:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.851 03:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:43.851 03:19:21 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:05:43.851 { 00:05:43.851 "name": "Malloc1", 00:05:43.851 "aliases": [ 00:05:43.851 "29011878-638d-46cd-aace-35334f230a51" 00:05:43.851 ], 00:05:43.851 "product_name": "Malloc disk", 00:05:43.851 "block_size": 512, 00:05:43.851 "num_blocks": 1048576, 00:05:43.851 "uuid": "29011878-638d-46cd-aace-35334f230a51", 00:05:43.851 "assigned_rate_limits": { 00:05:43.851 "rw_ios_per_sec": 0, 00:05:43.851 "rw_mbytes_per_sec": 0, 00:05:43.851 "r_mbytes_per_sec": 0, 00:05:43.851 "w_mbytes_per_sec": 0 00:05:43.851 }, 00:05:43.851 "claimed": true, 00:05:43.851 "claim_type": "exclusive_write", 00:05:43.851 "zoned": false, 00:05:43.851 "supported_io_types": { 00:05:43.851 "read": true, 00:05:43.851 "write": true, 00:05:43.851 "unmap": true, 00:05:43.851 "write_zeroes": true, 00:05:43.851 "flush": true, 00:05:43.851 "reset": true, 00:05:43.851 "compare": false, 00:05:43.851 "compare_and_write": false, 00:05:43.851 "abort": true, 00:05:43.851 "nvme_admin": false, 00:05:43.851 "nvme_io": false 00:05:43.851 }, 00:05:43.851 "memory_domains": [ 00:05:43.851 { 00:05:43.851 "dma_device_id": "system", 00:05:43.851 "dma_device_type": 1 00:05:43.851 }, 00:05:43.851 { 00:05:43.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.851 "dma_device_type": 2 00:05:43.851 } 00:05:43.851 ], 00:05:43.851 "driver_specific": {} 00:05:43.851 } 00:05:43.851 ]' 00:05:43.851 03:19:21 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:05:44.108 03:19:21 -- common/autotest_common.sh@1369 -- # bs=512 00:05:44.108 03:19:21 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:05:44.108 03:19:21 -- common/autotest_common.sh@1370 -- # nb=1048576 00:05:44.108 03:19:21 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:05:44.108 03:19:21 -- common/autotest_common.sh@1374 -- # echo 512 00:05:44.108 03:19:21 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:44.108 03:19:21 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:44.673 03:19:22 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:44.673 03:19:22 -- common/autotest_common.sh@1184 -- # local i=0 00:05:44.673 03:19:22 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:05:44.673 03:19:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:05:44.673 03:19:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:05:47.200 03:19:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:05:47.200 03:19:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:05:47.200 03:19:24 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:05:47.200 03:19:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
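get_bdev_size, shown above, derives the Malloc1 size in MiB from the bdev_get_bdevs JSON; the test later compares it against the size the initiator sees for the connected namespace. The same computation, condensed (rpc_cmd is the harness wrapper around scripts/rpc.py):

bdev_info=$(rpc_cmd bdev_get_bdevs -b Malloc1)
bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 512
nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1048576
echo $(( bs * nb / 1024 / 1024 ))              # 512 MiB, i.e. 536870912 bytes

That 536870912 is the malloc_size the trace stores, and the nvme connect that follows attaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 using the hostnqn/hostid pair from the trace, after which waitforserial polls lsblk until a device with serial SPDKISFASTANDAWESOME appears.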
00:05:47.200 03:19:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:05:47.200 03:19:24 -- common/autotest_common.sh@1194 -- # return 0 00:05:47.200 03:19:24 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:05:47.200 03:19:24 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:05:47.200 03:19:24 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:05:47.200 03:19:24 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:05:47.200 03:19:24 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:47.200 03:19:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:47.200 03:19:24 -- setup/common.sh@80 -- # echo 536870912 00:05:47.200 03:19:24 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:05:47.200 03:19:24 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:05:47.200 03:19:24 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:05:47.200 03:19:24 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:05:47.200 03:19:24 -- target/filesystem.sh@69 -- # partprobe 00:05:47.458 03:19:24 -- target/filesystem.sh@70 -- # sleep 1 00:05:48.391 03:19:25 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:05:48.391 03:19:25 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:05:48.391 03:19:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:48.391 03:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.391 03:19:25 -- common/autotest_common.sh@10 -- # set +x 00:05:48.391 ************************************ 00:05:48.391 START TEST filesystem_ext4 00:05:48.391 ************************************ 00:05:48.391 03:19:25 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:05:48.391 03:19:25 -- target/filesystem.sh@18 -- # fstype=ext4 00:05:48.391 03:19:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:48.391 03:19:25 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:05:48.391 03:19:25 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:05:48.391 03:19:25 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:05:48.391 03:19:25 -- common/autotest_common.sh@914 -- # local i=0 00:05:48.391 03:19:25 -- common/autotest_common.sh@915 -- # local force 00:05:48.391 03:19:25 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:05:48.391 03:19:25 -- common/autotest_common.sh@918 -- # force=-F 00:05:48.391 03:19:25 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:05:48.391 mke2fs 1.46.5 (30-Dec-2021) 00:05:48.649 Discarding device blocks: 0/522240 done 00:05:48.649 Creating filesystem with 522240 1k blocks and 130560 inodes 00:05:48.649 Filesystem UUID: 63997775-bbab-41f9-b941-e198bf8d4180 00:05:48.649 Superblock backups stored on blocks: 00:05:48.649 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:05:48.649 00:05:48.649 Allocating group tables: 0/64 done 00:05:48.649 Writing inode tables: 0/64 1/64 done 00:05:48.649 Creating journal (8192 blocks): done 00:05:48.649 Writing superblocks and filesystem accounting information: 0/64 done 00:05:48.649 00:05:48.649 03:19:26 -- common/autotest_common.sh@931 -- # return 0 00:05:48.649 03:19:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:48.907 03:19:26 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:48.907 03:19:26 -- target/filesystem.sh@25 -- # sync 00:05:48.907 03:19:26 -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:05:48.907 03:19:26 -- target/filesystem.sh@27 -- # sync 00:05:48.907 03:19:26 -- target/filesystem.sh@29 -- # i=0 00:05:48.907 03:19:26 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:48.907 03:19:26 -- target/filesystem.sh@37 -- # kill -0 152441 00:05:48.907 03:19:26 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:48.907 03:19:26 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:49.165 03:19:26 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:49.165 03:19:26 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:49.165 00:05:49.165 real 0m0.586s 00:05:49.165 user 0m0.018s 00:05:49.165 sys 0m0.035s 00:05:49.165 03:19:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.165 03:19:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.165 ************************************ 00:05:49.165 END TEST filesystem_ext4 00:05:49.165 ************************************ 00:05:49.165 03:19:26 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:05:49.165 03:19:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:49.165 03:19:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.165 03:19:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.165 ************************************ 00:05:49.165 START TEST filesystem_btrfs 00:05:49.165 ************************************ 00:05:49.165 03:19:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:05:49.165 03:19:26 -- target/filesystem.sh@18 -- # fstype=btrfs 00:05:49.165 03:19:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:49.165 03:19:26 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:05:49.165 03:19:26 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:05:49.165 03:19:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:05:49.165 03:19:26 -- common/autotest_common.sh@914 -- # local i=0 00:05:49.165 03:19:26 -- common/autotest_common.sh@915 -- # local force 00:05:49.165 03:19:26 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:05:49.165 03:19:26 -- common/autotest_common.sh@920 -- # force=-f 00:05:49.165 03:19:26 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:05:49.423 btrfs-progs v6.6.2 00:05:49.423 See https://btrfs.readthedocs.io for more information. 00:05:49.423 00:05:49.423 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
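Each filesystem variant above goes through the same make_filesystem helper: ext4 gets mke2fs's -F overwrite flag, everything else gets -f. As reconstructed from the @912-@931 trace lines (the helper also declares a retry counter i, whose loop is never exercised on this run):

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0 force
    if [ "$fstype" = ext4 ]; then
        force=-F        # mke2fs refuses a used partition without -F
    else
        force=-f        # mkfs.btrfs and mkfs.xfs take -f instead
    fi
    mkfs.$fstype $force "$dev_name" && return 0
}

The return 0 at @931 in the trace is this success path; every mkfs in this log succeeds on the first attempt.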
00:05:49.423 NOTE: several default settings have changed in version 5.15, please make sure 00:05:49.423 this does not affect your deployments: 00:05:49.423 - DUP for metadata (-m dup) 00:05:49.423 - enabled no-holes (-O no-holes) 00:05:49.423 - enabled free-space-tree (-R free-space-tree) 00:05:49.423 00:05:49.423 Label: (null) 00:05:49.423 UUID: 1ad25e95-a243-4e77-b52b-9770a145e9c3 00:05:49.423 Node size: 16384 00:05:49.423 Sector size: 4096 00:05:49.423 Filesystem size: 510.00MiB 00:05:49.423 Block group profiles: 00:05:49.423 Data: single 8.00MiB 00:05:49.423 Metadata: DUP 32.00MiB 00:05:49.423 System: DUP 8.00MiB 00:05:49.423 SSD detected: yes 00:05:49.423 Zoned device: no 00:05:49.423 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:05:49.423 Runtime features: free-space-tree 00:05:49.423 Checksum: crc32c 00:05:49.423 Number of devices: 1 00:05:49.423 Devices: 00:05:49.423 ID SIZE PATH 00:05:49.423 1 510.00MiB /dev/nvme0n1p1 00:05:49.423 00:05:49.423 03:19:26 -- common/autotest_common.sh@931 -- # return 0 00:05:49.423 03:19:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:50.356 03:19:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:50.356 03:19:27 -- target/filesystem.sh@25 -- # sync 00:05:50.356 03:19:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:50.356 03:19:27 -- target/filesystem.sh@27 -- # sync 00:05:50.356 03:19:27 -- target/filesystem.sh@29 -- # i=0 00:05:50.356 03:19:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:50.356 03:19:27 -- target/filesystem.sh@37 -- # kill -0 152441 00:05:50.356 03:19:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:50.356 03:19:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:50.356 03:19:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:50.356 03:19:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:50.356 00:05:50.356 real 0m1.180s 00:05:50.356 user 0m0.010s 00:05:50.356 sys 0m0.048s 00:05:50.356 03:19:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.356 03:19:27 -- common/autotest_common.sh@10 -- # set +x 00:05:50.356 ************************************ 00:05:50.356 END TEST filesystem_btrfs 00:05:50.356 ************************************ 00:05:50.356 03:19:27 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:05:50.356 03:19:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:50.356 03:19:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.356 03:19:27 -- common/autotest_common.sh@10 -- # set +x 00:05:50.356 ************************************ 00:05:50.356 START TEST filesystem_xfs 00:05:50.356 ************************************ 00:05:50.356 03:19:27 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:05:50.356 03:19:27 -- target/filesystem.sh@18 -- # fstype=xfs 00:05:50.356 03:19:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:50.356 03:19:27 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:05:50.356 03:19:27 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:05:50.356 03:19:27 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:05:50.356 03:19:27 -- common/autotest_common.sh@914 -- # local i=0 00:05:50.356 03:19:27 -- common/autotest_common.sh@915 -- # local force 00:05:50.356 03:19:27 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:05:50.356 03:19:27 -- common/autotest_common.sh@920 -- # force=-f 00:05:50.356 03:19:27 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:05:50.615 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:05:50.615 = sectsz=512 attr=2, projid32bit=1 00:05:50.615 = crc=1 finobt=1, sparse=1, rmapbt=0 00:05:50.615 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:05:50.615 data = bsize=4096 blocks=130560, imaxpct=25 00:05:50.615 = sunit=0 swidth=0 blks 00:05:50.615 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:05:50.615 log =internal log bsize=4096 blocks=16384, version=2 00:05:50.615 = sectsz=512 sunit=0 blks, lazy-count=1 00:05:50.615 realtime =none extsz=4096 blocks=0, rtextents=0 00:05:51.548 Discarding blocks...Done. 00:05:51.548 03:19:28 -- common/autotest_common.sh@931 -- # return 0 00:05:51.548 03:19:28 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:53.448 03:19:30 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:53.448 03:19:30 -- target/filesystem.sh@25 -- # sync 00:05:53.448 03:19:30 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:53.448 03:19:30 -- target/filesystem.sh@27 -- # sync 00:05:53.448 03:19:30 -- target/filesystem.sh@29 -- # i=0 00:05:53.448 03:19:30 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:53.448 03:19:30 -- target/filesystem.sh@37 -- # kill -0 152441 00:05:53.448 03:19:30 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:53.448 03:19:30 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:53.448 03:19:30 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:53.448 03:19:30 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:53.448 00:05:53.448 real 0m2.887s 00:05:53.448 user 0m0.014s 00:05:53.448 sys 0m0.039s 00:05:53.448 03:19:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.448 03:19:30 -- common/autotest_common.sh@10 -- # set +x 00:05:53.448 ************************************ 00:05:53.448 END TEST filesystem_xfs 00:05:53.448 ************************************ 00:05:53.448 03:19:30 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:05:53.707 03:19:31 -- target/filesystem.sh@93 -- # sync 00:05:53.707 03:19:31 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:05:53.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:05:53.707 03:19:31 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:05:53.707 03:19:31 -- common/autotest_common.sh@1205 -- # local i=0 00:05:53.707 03:19:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:05:53.707 03:19:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:53.707 03:19:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:05:53.707 03:19:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:53.707 03:19:31 -- common/autotest_common.sh@1217 -- # return 0 00:05:53.707 03:19:31 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:53.707 03:19:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:53.707 03:19:31 -- common/autotest_common.sh@10 -- # set +x 00:05:53.707 03:19:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:53.707 03:19:31 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:05:53.707 03:19:31 -- target/filesystem.sh@101 -- # killprocess 152441 00:05:53.707 03:19:31 -- common/autotest_common.sh@936 -- # '[' -z 152441 ']' 00:05:53.707 03:19:31 -- common/autotest_common.sh@940 -- # kill -0 152441 00:05:53.707 03:19:31 -- 
common/autotest_common.sh@941 -- # uname 00:05:53.707 03:19:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.707 03:19:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 152441 00:05:53.707 03:19:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.707 03:19:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.707 03:19:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 152441' 00:05:53.707 killing process with pid 152441 00:05:53.707 03:19:31 -- common/autotest_common.sh@955 -- # kill 152441 00:05:53.707 03:19:31 -- common/autotest_common.sh@960 -- # wait 152441 00:05:54.313 03:19:31 -- target/filesystem.sh@102 -- # nvmfpid= 00:05:54.313 00:05:54.313 real 0m10.886s 00:05:54.313 user 0m41.476s 00:05:54.313 sys 0m1.796s 00:05:54.313 03:19:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.313 03:19:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.313 ************************************ 00:05:54.313 END TEST nvmf_filesystem_no_in_capsule 00:05:54.313 ************************************ 00:05:54.313 03:19:31 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:05:54.313 03:19:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:54.313 03:19:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.313 03:19:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.313 ************************************ 00:05:54.313 START TEST nvmf_filesystem_in_capsule 00:05:54.313 ************************************ 00:05:54.313 03:19:31 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:05:54.313 03:19:31 -- target/filesystem.sh@47 -- # in_capsule=4096 00:05:54.313 03:19:31 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:54.313 03:19:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:05:54.313 03:19:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:54.313 03:19:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.313 03:19:31 -- nvmf/common.sh@470 -- # nvmfpid=154016 00:05:54.313 03:19:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:54.313 03:19:31 -- nvmf/common.sh@471 -- # waitforlisten 154016 00:05:54.313 03:19:31 -- common/autotest_common.sh@817 -- # '[' -z 154016 ']' 00:05:54.313 03:19:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.313 03:19:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.313 03:19:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.313 03:19:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.313 03:19:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.313 [2024-04-19 03:19:31.870495] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
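The second pass repeats the whole flow with in_capsule=4096; the only functional difference is the transport's in-capsule data size, visible when the two nvmf_create_transport calls are put side by side (both taken from this log):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # first pass: no in-capsule data
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # second pass: up to 4 KiB inline

With -c 4096, a write payload of 4 KiB or less travels inside the command capsule itself instead of in a separate data transfer after the target signals ready-to-transfer, so the same filesystem workload ends up exercising a different NVMe/TCP data path.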
00:05:54.313 [2024-04-19 03:19:31.870570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:54.572 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.572 [2024-04-19 03:19:31.941085] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.572 [2024-04-19 03:19:32.061256] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:54.572 [2024-04-19 03:19:32.061333] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:54.572 [2024-04-19 03:19:32.061349] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:54.572 [2024-04-19 03:19:32.061363] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:54.572 [2024-04-19 03:19:32.061375] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:54.572 [2024-04-19 03:19:32.061460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.572 [2024-04-19 03:19:32.061517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.572 [2024-04-19 03:19:32.061551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.572 [2024-04-19 03:19:32.061555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.507 03:19:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:55.507 03:19:32 -- common/autotest_common.sh@850 -- # return 0 00:05:55.507 03:19:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:05:55.507 03:19:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:55.507 03:19:32 -- common/autotest_common.sh@10 -- # set +x 00:05:55.507 03:19:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:55.507 03:19:32 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:55.507 03:19:32 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:05:55.507 03:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.507 03:19:32 -- common/autotest_common.sh@10 -- # set +x 00:05:55.507 [2024-04-19 03:19:32.845558] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.507 03:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:55.507 03:19:32 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:55.507 03:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.507 03:19:32 -- common/autotest_common.sh@10 -- # set +x 00:05:55.507 Malloc1 00:05:55.507 03:19:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:55.507 03:19:33 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:55.507 03:19:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.507 03:19:33 -- common/autotest_common.sh@10 -- # set +x 00:05:55.507 03:19:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:55.507 03:19:33 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:55.507 03:19:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.507 03:19:33 -- common/autotest_common.sh@10 -- # set +x 00:05:55.507 03:19:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:55.507 03:19:33 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:55.507 03:19:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.507 03:19:33 -- common/autotest_common.sh@10 -- # set +x 00:05:55.507 [2024-04-19 03:19:33.031917] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:55.507 03:19:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:55.507 03:19:33 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:55.507 03:19:33 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:05:55.507 03:19:33 -- common/autotest_common.sh@1365 -- # local bdev_info 00:05:55.507 03:19:33 -- common/autotest_common.sh@1366 -- # local bs 00:05:55.507 03:19:33 -- common/autotest_common.sh@1367 -- # local nb 00:05:55.507 03:19:33 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:55.507 03:19:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.507 03:19:33 -- common/autotest_common.sh@10 -- # set +x 00:05:55.507 03:19:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:55.507 03:19:33 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:05:55.507 { 00:05:55.507 "name": "Malloc1", 00:05:55.507 "aliases": [ 00:05:55.507 "29cc7c33-644e-43dc-8035-ca22cb5d53aa" 00:05:55.507 ], 00:05:55.507 "product_name": "Malloc disk", 00:05:55.507 "block_size": 512, 00:05:55.507 "num_blocks": 1048576, 00:05:55.507 "uuid": "29cc7c33-644e-43dc-8035-ca22cb5d53aa", 00:05:55.507 "assigned_rate_limits": { 00:05:55.507 "rw_ios_per_sec": 0, 00:05:55.507 "rw_mbytes_per_sec": 0, 00:05:55.507 "r_mbytes_per_sec": 0, 00:05:55.507 "w_mbytes_per_sec": 0 00:05:55.507 }, 00:05:55.507 "claimed": true, 00:05:55.507 "claim_type": "exclusive_write", 00:05:55.507 "zoned": false, 00:05:55.507 "supported_io_types": { 00:05:55.507 "read": true, 00:05:55.507 "write": true, 00:05:55.507 "unmap": true, 00:05:55.507 "write_zeroes": true, 00:05:55.507 "flush": true, 00:05:55.507 "reset": true, 00:05:55.507 "compare": false, 00:05:55.507 "compare_and_write": false, 00:05:55.507 "abort": true, 00:05:55.507 "nvme_admin": false, 00:05:55.507 "nvme_io": false 00:05:55.507 }, 00:05:55.507 "memory_domains": [ 00:05:55.507 { 00:05:55.507 "dma_device_id": "system", 00:05:55.507 "dma_device_type": 1 00:05:55.507 }, 00:05:55.507 { 00:05:55.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.507 "dma_device_type": 2 00:05:55.507 } 00:05:55.507 ], 00:05:55.507 "driver_specific": {} 00:05:55.507 } 00:05:55.507 ]' 00:05:55.507 03:19:33 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:05:55.773 03:19:33 -- common/autotest_common.sh@1369 -- # bs=512 00:05:55.773 03:19:33 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:05:55.773 03:19:33 -- common/autotest_common.sh@1370 -- # nb=1048576 00:05:55.773 03:19:33 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:05:55.773 03:19:33 -- common/autotest_common.sh@1374 -- # echo 512 00:05:55.773 03:19:33 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:55.773 03:19:33 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:56.343 03:19:33 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:56.343 03:19:33 -- common/autotest_common.sh@1184 -- # local i=0 00:05:56.343 03:19:33 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:05:56.343 03:19:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:05:56.343 03:19:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:05:58.241 03:19:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:05:58.241 03:19:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:05:58.241 03:19:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:05:58.241 03:19:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:05:58.241 03:19:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:05:58.241 03:19:35 -- common/autotest_common.sh@1194 -- # return 0 00:05:58.241 03:19:35 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:05:58.241 03:19:35 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:05:58.241 03:19:35 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:05:58.241 03:19:35 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:05:58.241 03:19:35 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:58.241 03:19:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:58.241 03:19:35 -- setup/common.sh@80 -- # echo 536870912 00:05:58.241 03:19:35 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:05:58.241 03:19:35 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:05:58.241 03:19:35 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:05:58.241 03:19:35 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:05:58.806 03:19:36 -- target/filesystem.sh@69 -- # partprobe 00:05:59.738 03:19:37 -- target/filesystem.sh@70 -- # sleep 1 00:06:00.671 03:19:38 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:00.671 03:19:38 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:00.671 03:19:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:00.671 03:19:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.671 03:19:38 -- common/autotest_common.sh@10 -- # set +x 00:06:00.671 ************************************ 00:06:00.671 START TEST filesystem_in_capsule_ext4 00:06:00.671 ************************************ 00:06:00.671 03:19:38 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:00.671 03:19:38 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:00.671 03:19:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:00.671 03:19:38 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:00.671 03:19:38 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:00.671 03:19:38 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:00.671 03:19:38 -- common/autotest_common.sh@914 -- # local i=0 00:06:00.671 03:19:38 -- common/autotest_common.sh@915 -- # local force 00:06:00.671 03:19:38 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:00.671 03:19:38 -- common/autotest_common.sh@918 -- # force=-F 00:06:00.671 03:19:38 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:00.671 mke2fs 1.46.5 (30-Dec-2021) 00:06:00.671 Discarding device blocks: 0/522240 done 00:06:00.928 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:00.928 Filesystem UUID: 143e422f-4b8a-496a-a37d-9766cf2ce9a9 00:06:00.928 Superblock backups stored on blocks: 00:06:00.928 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:00.928 00:06:00.928 
Allocating group tables: 0/64 done 00:06:00.928 Writing inode tables: 0/64 done 00:06:02.299 Creating journal (8192 blocks): done 00:06:02.556 Writing superblocks and filesystem accounting information: 0/64 done 00:06:02.556 00:06:02.556 03:19:39 -- common/autotest_common.sh@931 -- # return 0 00:06:02.556 03:19:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:02.556 03:19:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:02.557 03:19:40 -- target/filesystem.sh@25 -- # sync 00:06:02.557 03:19:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:02.557 03:19:40 -- target/filesystem.sh@27 -- # sync 00:06:02.557 03:19:40 -- target/filesystem.sh@29 -- # i=0 00:06:02.557 03:19:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:02.557 03:19:40 -- target/filesystem.sh@37 -- # kill -0 154016 00:06:02.557 03:19:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:02.557 03:19:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:02.557 03:19:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:02.557 03:19:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:02.814 00:06:02.815 real 0m2.002s 00:06:02.815 user 0m0.012s 00:06:02.815 sys 0m0.040s 00:06:02.815 03:19:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.815 03:19:40 -- common/autotest_common.sh@10 -- # set +x 00:06:02.815 ************************************ 00:06:02.815 END TEST filesystem_in_capsule_ext4 00:06:02.815 ************************************ 00:06:02.815 03:19:40 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:02.815 03:19:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:02.815 03:19:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.815 03:19:40 -- common/autotest_common.sh@10 -- # set +x 00:06:02.815 ************************************ 00:06:02.815 START TEST filesystem_in_capsule_btrfs 00:06:02.815 ************************************ 00:06:02.815 03:19:40 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:02.815 03:19:40 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:02.815 03:19:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:02.815 03:19:40 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:02.815 03:19:40 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:02.815 03:19:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:02.815 03:19:40 -- common/autotest_common.sh@914 -- # local i=0 00:06:02.815 03:19:40 -- common/autotest_common.sh@915 -- # local force 00:06:02.815 03:19:40 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:02.815 03:19:40 -- common/autotest_common.sh@920 -- # force=-f 00:06:02.815 03:19:40 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:03.379 btrfs-progs v6.6.2 00:06:03.379 See https://btrfs.readthedocs.io for more information. 00:06:03.379 00:06:03.379 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
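Every variant runs the same smoke test after mkfs, traced above as target/filesystem.sh@23-@43: mount the namespace's partition, create and delete a file with syncs in between, unmount, and confirm that both the target process and the block devices survived. Condensed, with the pid from this pass:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                      # write something through NVMe/TCP
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 154016                             # target process must still be running
lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible to the host
lsblk -l -o NAME | grep -q -w nvme0n1p1    # and so is the partition

A hang or I/O error anywhere in this sequence would fail the TEST block instead of printing the real/user/sys timings that follow each END TEST marker.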
00:06:03.379 NOTE: several default settings have changed in version 5.15, please make sure 00:06:03.379 this does not affect your deployments: 00:06:03.379 - DUP for metadata (-m dup) 00:06:03.379 - enabled no-holes (-O no-holes) 00:06:03.379 - enabled free-space-tree (-R free-space-tree) 00:06:03.379 00:06:03.379 Label: (null) 00:06:03.379 UUID: 8be716e0-7496-4fac-a7a2-cd77326ceb2e 00:06:03.379 Node size: 16384 00:06:03.379 Sector size: 4096 00:06:03.379 Filesystem size: 510.00MiB 00:06:03.379 Block group profiles: 00:06:03.379 Data: single 8.00MiB 00:06:03.379 Metadata: DUP 32.00MiB 00:06:03.379 System: DUP 8.00MiB 00:06:03.379 SSD detected: yes 00:06:03.379 Zoned device: no 00:06:03.379 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:03.379 Runtime features: free-space-tree 00:06:03.379 Checksum: crc32c 00:06:03.379 Number of devices: 1 00:06:03.379 Devices: 00:06:03.379 ID SIZE PATH 00:06:03.379 1 510.00MiB /dev/nvme0n1p1 00:06:03.379 00:06:03.379 03:19:40 -- common/autotest_common.sh@931 -- # return 0 00:06:03.379 03:19:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:03.636 03:19:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:03.636 03:19:40 -- target/filesystem.sh@25 -- # sync 00:06:03.636 03:19:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:03.636 03:19:40 -- target/filesystem.sh@27 -- # sync 00:06:03.636 03:19:40 -- target/filesystem.sh@29 -- # i=0 00:06:03.636 03:19:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:03.636 03:19:40 -- target/filesystem.sh@37 -- # kill -0 154016 00:06:03.636 03:19:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:03.636 03:19:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:03.636 03:19:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:03.636 03:19:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:03.636 00:06:03.636 real 0m0.740s 00:06:03.636 user 0m0.011s 00:06:03.636 sys 0m0.043s 00:06:03.636 03:19:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:03.636 03:19:40 -- common/autotest_common.sh@10 -- # set +x 00:06:03.636 ************************************ 00:06:03.636 END TEST filesystem_in_capsule_btrfs 00:06:03.636 ************************************ 00:06:03.636 03:19:41 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:03.636 03:19:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:03.636 03:19:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.636 03:19:41 -- common/autotest_common.sh@10 -- # set +x 00:06:03.636 ************************************ 00:06:03.636 START TEST filesystem_in_capsule_xfs 00:06:03.636 ************************************ 00:06:03.636 03:19:41 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:03.636 03:19:41 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:03.636 03:19:41 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:03.637 03:19:41 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:03.637 03:19:41 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:03.637 03:19:41 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:03.637 03:19:41 -- common/autotest_common.sh@914 -- # local i=0 00:06:03.637 03:19:41 -- common/autotest_common.sh@915 -- # local force 00:06:03.637 03:19:41 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:03.637 03:19:41 -- common/autotest_common.sh@920 -- # force=-f 
00:06:03.637 03:19:41 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:03.895 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:03.895 = sectsz=512 attr=2, projid32bit=1 00:06:03.895 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:03.895 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:03.895 data = bsize=4096 blocks=130560, imaxpct=25 00:06:03.895 = sunit=0 swidth=0 blks 00:06:03.895 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:03.895 log =internal log bsize=4096 blocks=16384, version=2 00:06:03.895 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:03.895 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:04.842 Discarding blocks...Done. 00:06:04.842 03:19:42 -- common/autotest_common.sh@931 -- # return 0 00:06:04.842 03:19:42 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:06.736 03:19:44 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:06.736 03:19:44 -- target/filesystem.sh@25 -- # sync 00:06:06.736 03:19:44 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:06.736 03:19:44 -- target/filesystem.sh@27 -- # sync 00:06:06.736 03:19:44 -- target/filesystem.sh@29 -- # i=0 00:06:06.736 03:19:44 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:06.736 03:19:44 -- target/filesystem.sh@37 -- # kill -0 154016 00:06:06.736 03:19:44 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:06.736 03:19:44 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:06.994 03:19:44 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:06.994 03:19:44 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:06.994 00:06:06.994 real 0m3.216s 00:06:06.994 user 0m0.010s 00:06:06.994 sys 0m0.046s 00:06:06.994 03:19:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.994 03:19:44 -- common/autotest_common.sh@10 -- # set +x 00:06:06.994 ************************************ 00:06:06.994 END TEST filesystem_in_capsule_xfs 00:06:06.994 ************************************ 00:06:06.994 03:19:44 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:07.251 03:19:44 -- target/filesystem.sh@93 -- # sync 00:06:07.251 03:19:44 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:07.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:07.251 03:19:44 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:07.251 03:19:44 -- common/autotest_common.sh@1205 -- # local i=0 00:06:07.251 03:19:44 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:07.251 03:19:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:07.251 03:19:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:07.251 03:19:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:07.251 03:19:44 -- common/autotest_common.sh@1217 -- # return 0 00:06:07.251 03:19:44 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:07.251 03:19:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:07.251 03:19:44 -- common/autotest_common.sh@10 -- # set +x 00:06:07.251 03:19:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:07.251 03:19:44 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:07.251 03:19:44 -- target/filesystem.sh@101 -- # killprocess 154016 00:06:07.251 03:19:44 -- common/autotest_common.sh@936 -- # '[' -z 154016 ']' 00:06:07.251 03:19:44 -- common/autotest_common.sh@940 -- # kill -0 154016 
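After the xfs pass the test tears everything down in the order traced above: release the partition, disconnect the initiator, remove the subsystem over RPC, then stop the target. Condensed from target/filesystem.sh@91-@101, with the nqn and pid from the trace:

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # hold the disk lock while dropping partition 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 154016 && wait 154016                       # killprocess: SIGTERM, then reap

In between, waitforserial_disconnect polls lsblk until no device with serial SPDKISFASTANDAWESOME remains, so the subsystem is only deleted once the host has fully let go of it.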
00:06:07.251 03:19:44 -- common/autotest_common.sh@941 -- # uname 00:06:07.251 03:19:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.251 03:19:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 154016 00:06:07.251 03:19:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.251 03:19:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.251 03:19:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 154016' 00:06:07.251 killing process with pid 154016 00:06:07.251 03:19:44 -- common/autotest_common.sh@955 -- # kill 154016 00:06:07.251 03:19:44 -- common/autotest_common.sh@960 -- # wait 154016 00:06:07.817 03:19:45 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:07.817 00:06:07.817 real 0m13.403s 00:06:07.817 user 0m51.691s 00:06:07.817 sys 0m1.854s 00:06:07.817 03:19:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.817 03:19:45 -- common/autotest_common.sh@10 -- # set +x 00:06:07.817 ************************************ 00:06:07.817 END TEST nvmf_filesystem_in_capsule 00:06:07.817 ************************************ 00:06:07.817 03:19:45 -- target/filesystem.sh@108 -- # nvmftestfini 00:06:07.817 03:19:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:07.817 03:19:45 -- nvmf/common.sh@117 -- # sync 00:06:07.817 03:19:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:07.817 03:19:45 -- nvmf/common.sh@120 -- # set +e 00:06:07.817 03:19:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:07.817 03:19:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:07.817 rmmod nvme_tcp 00:06:07.817 rmmod nvme_fabrics 00:06:07.817 rmmod nvme_keyring 00:06:07.817 03:19:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:07.817 03:19:45 -- nvmf/common.sh@124 -- # set -e 00:06:07.817 03:19:45 -- nvmf/common.sh@125 -- # return 0 00:06:07.818 03:19:45 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:06:07.818 03:19:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:07.818 03:19:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:07.818 03:19:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:07.818 03:19:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:07.818 03:19:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:07.818 03:19:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.818 03:19:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:07.818 03:19:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.355 03:19:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:10.355 00:06:10.355 real 0m29.072s 00:06:10.355 user 1m34.203s 00:06:10.355 sys 0m5.374s 00:06:10.355 03:19:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.355 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:06:10.355 ************************************ 00:06:10.355 END TEST nvmf_filesystem 00:06:10.355 ************************************ 00:06:10.355 03:19:47 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:10.355 03:19:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:10.355 03:19:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.355 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:06:10.355 ************************************ 00:06:10.355 START TEST nvmf_discovery 00:06:10.355 ************************************ 00:06:10.355 03:19:47 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:10.355 * Looking for test storage... 00:06:10.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:10.355 03:19:47 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.355 03:19:47 -- nvmf/common.sh@7 -- # uname -s 00:06:10.355 03:19:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.355 03:19:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.355 03:19:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.355 03:19:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.355 03:19:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.355 03:19:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.355 03:19:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.355 03:19:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.355 03:19:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.355 03:19:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.355 03:19:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:10.355 03:19:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:10.355 03:19:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.355 03:19:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.355 03:19:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.355 03:19:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.355 03:19:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.355 03:19:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.355 03:19:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.355 03:19:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.355 03:19:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.355 03:19:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.355 03:19:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.355 03:19:47 -- paths/export.sh@5 -- # export PATH 00:06:10.355 03:19:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.355 03:19:47 -- nvmf/common.sh@47 -- # : 0 00:06:10.355 03:19:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:10.355 03:19:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:10.355 03:19:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.355 03:19:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.355 03:19:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.355 03:19:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:10.355 03:19:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:10.355 03:19:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:10.355 03:19:47 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:10.355 03:19:47 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:10.355 03:19:47 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:10.355 03:19:47 -- target/discovery.sh@15 -- # hash nvme 00:06:10.355 03:19:47 -- target/discovery.sh@20 -- # nvmftestinit 00:06:10.355 03:19:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:10.355 03:19:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.355 03:19:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:10.355 03:19:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:10.355 03:19:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:10.355 03:19:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.355 03:19:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:10.355 03:19:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.355 03:19:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:10.355 03:19:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:10.355 03:19:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:10.355 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:06:12.257 03:19:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:12.257 03:19:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:12.257 03:19:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:12.257 03:19:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:12.257 03:19:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:12.257 03:19:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:12.257 03:19:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:12.257 03:19:49 -- 
nvmf/common.sh@295 -- # net_devs=() 00:06:12.257 03:19:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:12.257 03:19:49 -- nvmf/common.sh@296 -- # e810=() 00:06:12.257 03:19:49 -- nvmf/common.sh@296 -- # local -ga e810 00:06:12.257 03:19:49 -- nvmf/common.sh@297 -- # x722=() 00:06:12.257 03:19:49 -- nvmf/common.sh@297 -- # local -ga x722 00:06:12.257 03:19:49 -- nvmf/common.sh@298 -- # mlx=() 00:06:12.257 03:19:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:12.257 03:19:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:12.257 03:19:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:12.257 03:19:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:12.257 03:19:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:12.257 03:19:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:12.257 03:19:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:12.257 03:19:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:12.257 03:19:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:12.257 03:19:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:12.257 03:19:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:12.257 03:19:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:12.257 03:19:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:12.257 03:19:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:12.257 03:19:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:12.257 03:19:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:12.257 03:19:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:12.257 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:12.257 03:19:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:12.257 03:19:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:12.257 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:12.257 03:19:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:12.257 03:19:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:12.257 03:19:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.257 03:19:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:12.257 03:19:49 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.257 03:19:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:12.257 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:12.257 03:19:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.257 03:19:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:12.257 03:19:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.257 03:19:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:12.257 03:19:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.257 03:19:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:12.257 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:12.257 03:19:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.257 03:19:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:12.257 03:19:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:12.257 03:19:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:12.257 03:19:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:12.257 03:19:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:12.257 03:19:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:12.257 03:19:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:12.257 03:19:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:12.257 03:19:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:12.257 03:19:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:12.257 03:19:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:12.257 03:19:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:12.257 03:19:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:12.257 03:19:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:12.257 03:19:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:12.257 03:19:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:12.257 03:19:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:12.257 03:19:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:12.257 03:19:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:12.257 03:19:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:12.257 03:19:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:12.515 03:19:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:12.515 03:19:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:12.515 03:19:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:12.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:12.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:06:12.515 00:06:12.515 --- 10.0.0.2 ping statistics --- 00:06:12.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.515 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:06:12.515 03:19:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:12.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:12.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:06:12.515 00:06:12.515 --- 10.0.0.1 ping statistics --- 00:06:12.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.515 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:06:12.515 03:19:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:12.515 03:19:49 -- nvmf/common.sh@411 -- # return 0 00:06:12.515 03:19:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:12.515 03:19:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:12.515 03:19:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:12.515 03:19:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:12.515 03:19:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:12.515 03:19:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:12.515 03:19:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:12.515 03:19:49 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:12.515 03:19:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:12.515 03:19:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:12.515 03:19:49 -- common/autotest_common.sh@10 -- # set +x 00:06:12.515 03:19:49 -- nvmf/common.sh@470 -- # nvmfpid=157760 00:06:12.515 03:19:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:12.515 03:19:49 -- nvmf/common.sh@471 -- # waitforlisten 157760 00:06:12.515 03:19:49 -- common/autotest_common.sh@817 -- # '[' -z 157760 ']' 00:06:12.515 03:19:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.515 03:19:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:12.515 03:19:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.515 03:19:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:12.515 03:19:49 -- common/autotest_common.sh@10 -- # set +x 00:06:12.515 [2024-04-19 03:19:49.926691] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:06:12.515 [2024-04-19 03:19:49.926773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:12.515 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.515 [2024-04-19 03:19:49.996540] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.772 [2024-04-19 03:19:50.132224] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:12.772 [2024-04-19 03:19:50.132292] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:12.772 [2024-04-19 03:19:50.132320] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:12.773 [2024-04-19 03:19:50.132332] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:12.773 [2024-04-19 03:19:50.132342] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
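The network bed the harness assembled above reduces to the sequence below — a minimal sketch assuming this run's cvl_0_0/cvl_0_1 interface names (they vary with the NIC driver); the actual logic lives in nvmf_tcp_init in test/nvmf/common.sh:

  # one port of the NIC pair goes into a private namespace; both ends get
  # addresses on 10.0.0.0/24 and are pinged before any NVMe traffic starts
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1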
00:06:12.773 [2024-04-19 03:19:50.132478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.773 [2024-04-19 03:19:50.132540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.773 [2024-04-19 03:19:50.132508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.773 [2024-04-19 03:19:50.132543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.366 03:19:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.366 03:19:50 -- common/autotest_common.sh@850 -- # return 0 00:06:13.366 03:19:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:13.366 03:19:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:13.366 03:19:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 03:19:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:13.624 03:19:50 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:13.624 03:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.624 03:19:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 [2024-04-19 03:19:50.917455] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.624 03:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.624 03:19:50 -- target/discovery.sh@26 -- # seq 1 4 00:06:13.624 03:19:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:13.624 03:19:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:13.624 03:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.624 03:19:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 Null1 00:06:13.624 03:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.624 03:19:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:13.624 03:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.624 03:19:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 03:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.624 03:19:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:13.624 03:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.624 03:19:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 03:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.624 03:19:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:13.624 03:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.624 03:19:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 [2024-04-19 03:19:50.957767] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:13.624 03:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.624 03:19:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:13.624 03:19:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:13.624 03:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.624 03:19:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 Null2 00:06:13.624 03:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.624 03:19:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:13.624 03:19:50 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.624 03:19:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 03:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.624 03:19:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:13.624 03:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.624 03:19:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 03:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.624 03:19:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:13.624 03:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.624 03:19:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 03:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.624 03:19:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:13.624 03:19:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:13.624 03:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.624 03:19:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 Null3 00:06:13.624 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.624 03:19:51 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:13.624 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.624 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.624 03:19:51 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:13.624 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.624 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.624 03:19:51 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:13.624 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.624 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.625 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.625 03:19:51 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:13.625 03:19:51 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:13.625 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.625 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.625 Null4 00:06:13.625 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.625 03:19:51 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:13.625 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.625 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.625 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.625 03:19:51 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:13.625 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.625 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.625 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.625 03:19:51 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:13.625 
03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.625 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.625 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.625 03:19:51 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:13.625 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.625 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.625 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.625 03:19:51 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:13.625 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.625 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.625 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.625 03:19:51 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:13.882 00:06:13.882 Discovery Log Number of Records 6, Generation counter 6 00:06:13.882 =====Discovery Log Entry 0====== 00:06:13.882 trtype: tcp 00:06:13.882 adrfam: ipv4 00:06:13.882 subtype: current discovery subsystem 00:06:13.882 treq: not required 00:06:13.882 portid: 0 00:06:13.882 trsvcid: 4420 00:06:13.882 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:13.882 traddr: 10.0.0.2 00:06:13.882 eflags: explicit discovery connections, duplicate discovery information 00:06:13.882 sectype: none 00:06:13.882 =====Discovery Log Entry 1====== 00:06:13.882 trtype: tcp 00:06:13.882 adrfam: ipv4 00:06:13.882 subtype: nvme subsystem 00:06:13.882 treq: not required 00:06:13.882 portid: 0 00:06:13.882 trsvcid: 4420 00:06:13.882 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:13.882 traddr: 10.0.0.2 00:06:13.882 eflags: none 00:06:13.882 sectype: none 00:06:13.882 =====Discovery Log Entry 2====== 00:06:13.882 trtype: tcp 00:06:13.882 adrfam: ipv4 00:06:13.882 subtype: nvme subsystem 00:06:13.882 treq: not required 00:06:13.882 portid: 0 00:06:13.882 trsvcid: 4420 00:06:13.882 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:13.882 traddr: 10.0.0.2 00:06:13.882 eflags: none 00:06:13.882 sectype: none 00:06:13.882 =====Discovery Log Entry 3====== 00:06:13.882 trtype: tcp 00:06:13.882 adrfam: ipv4 00:06:13.882 subtype: nvme subsystem 00:06:13.882 treq: not required 00:06:13.882 portid: 0 00:06:13.882 trsvcid: 4420 00:06:13.882 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:13.882 traddr: 10.0.0.2 00:06:13.883 eflags: none 00:06:13.883 sectype: none 00:06:13.883 =====Discovery Log Entry 4====== 00:06:13.883 trtype: tcp 00:06:13.883 adrfam: ipv4 00:06:13.883 subtype: nvme subsystem 00:06:13.883 treq: not required 00:06:13.883 portid: 0 00:06:13.883 trsvcid: 4420 00:06:13.883 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:13.883 traddr: 10.0.0.2 00:06:13.883 eflags: none 00:06:13.883 sectype: none 00:06:13.883 =====Discovery Log Entry 5====== 00:06:13.883 trtype: tcp 00:06:13.883 adrfam: ipv4 00:06:13.883 subtype: discovery subsystem referral 00:06:13.883 treq: not required 00:06:13.883 portid: 0 00:06:13.883 trsvcid: 4430 00:06:13.883 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:13.883 traddr: 10.0.0.2 00:06:13.883 eflags: none 00:06:13.883 sectype: none 00:06:13.883 03:19:51 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:13.883 Perform nvmf subsystem discovery via RPC 00:06:13.883 03:19:51 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:13.883 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.883 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.883 [2024-04-19 03:19:51.262548] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:06:13.883 [ 00:06:13.883 { 00:06:13.883 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:13.883 "subtype": "Discovery", 00:06:13.883 "listen_addresses": [ 00:06:13.883 { 00:06:13.883 "transport": "TCP", 00:06:13.883 "trtype": "TCP", 00:06:13.883 "adrfam": "IPv4", 00:06:13.883 "traddr": "10.0.0.2", 00:06:13.883 "trsvcid": "4420" 00:06:13.883 } 00:06:13.883 ], 00:06:13.883 "allow_any_host": true, 00:06:13.883 "hosts": [] 00:06:13.883 }, 00:06:13.883 { 00:06:13.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:13.883 "subtype": "NVMe", 00:06:13.883 "listen_addresses": [ 00:06:13.883 { 00:06:13.883 "transport": "TCP", 00:06:13.883 "trtype": "TCP", 00:06:13.883 "adrfam": "IPv4", 00:06:13.883 "traddr": "10.0.0.2", 00:06:13.883 "trsvcid": "4420" 00:06:13.883 } 00:06:13.883 ], 00:06:13.883 "allow_any_host": true, 00:06:13.883 "hosts": [], 00:06:13.883 "serial_number": "SPDK00000000000001", 00:06:13.883 "model_number": "SPDK bdev Controller", 00:06:13.883 "max_namespaces": 32, 00:06:13.883 "min_cntlid": 1, 00:06:13.883 "max_cntlid": 65519, 00:06:13.883 "namespaces": [ 00:06:13.883 { 00:06:13.883 "nsid": 1, 00:06:13.883 "bdev_name": "Null1", 00:06:13.883 "name": "Null1", 00:06:13.883 "nguid": "48792F477D3D47B1A4D7DADF4D80A791", 00:06:13.883 "uuid": "48792f47-7d3d-47b1-a4d7-dadf4d80a791" 00:06:13.883 } 00:06:13.883 ] 00:06:13.883 }, 00:06:13.883 { 00:06:13.883 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:13.883 "subtype": "NVMe", 00:06:13.883 "listen_addresses": [ 00:06:13.883 { 00:06:13.883 "transport": "TCP", 00:06:13.883 "trtype": "TCP", 00:06:13.883 "adrfam": "IPv4", 00:06:13.883 "traddr": "10.0.0.2", 00:06:13.883 "trsvcid": "4420" 00:06:13.883 } 00:06:13.883 ], 00:06:13.883 "allow_any_host": true, 00:06:13.883 "hosts": [], 00:06:13.883 "serial_number": "SPDK00000000000002", 00:06:13.883 "model_number": "SPDK bdev Controller", 00:06:13.883 "max_namespaces": 32, 00:06:13.883 "min_cntlid": 1, 00:06:13.883 "max_cntlid": 65519, 00:06:13.883 "namespaces": [ 00:06:13.883 { 00:06:13.883 "nsid": 1, 00:06:13.883 "bdev_name": "Null2", 00:06:13.883 "name": "Null2", 00:06:13.883 "nguid": "6FA60E83153942CAB652D8D0E1DEF03E", 00:06:13.883 "uuid": "6fa60e83-1539-42ca-b652-d8d0e1def03e" 00:06:13.883 } 00:06:13.883 ] 00:06:13.883 }, 00:06:13.883 { 00:06:13.883 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:13.883 "subtype": "NVMe", 00:06:13.883 "listen_addresses": [ 00:06:13.883 { 00:06:13.883 "transport": "TCP", 00:06:13.883 "trtype": "TCP", 00:06:13.883 "adrfam": "IPv4", 00:06:13.883 "traddr": "10.0.0.2", 00:06:13.883 "trsvcid": "4420" 00:06:13.883 } 00:06:13.883 ], 00:06:13.883 "allow_any_host": true, 00:06:13.883 "hosts": [], 00:06:13.883 "serial_number": "SPDK00000000000003", 00:06:13.883 "model_number": "SPDK bdev Controller", 00:06:13.883 "max_namespaces": 32, 00:06:13.883 "min_cntlid": 1, 00:06:13.883 "max_cntlid": 65519, 00:06:13.883 "namespaces": [ 00:06:13.883 { 00:06:13.883 "nsid": 1, 00:06:13.883 "bdev_name": "Null3", 00:06:13.883 "name": "Null3", 00:06:13.883 "nguid": "D21578CDFD1A4613800F5683D324F877", 00:06:13.883 "uuid": "d21578cd-fd1a-4613-800f-5683d324f877" 00:06:13.883 } 00:06:13.883 ] 
00:06:13.883 }, 00:06:13.883 { 00:06:13.883 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:13.883 "subtype": "NVMe", 00:06:13.883 "listen_addresses": [ 00:06:13.883 { 00:06:13.883 "transport": "TCP", 00:06:13.883 "trtype": "TCP", 00:06:13.883 "adrfam": "IPv4", 00:06:13.883 "traddr": "10.0.0.2", 00:06:13.883 "trsvcid": "4420" 00:06:13.883 } 00:06:13.883 ], 00:06:13.883 "allow_any_host": true, 00:06:13.883 "hosts": [], 00:06:13.883 "serial_number": "SPDK00000000000004", 00:06:13.883 "model_number": "SPDK bdev Controller", 00:06:13.883 "max_namespaces": 32, 00:06:13.883 "min_cntlid": 1, 00:06:13.883 "max_cntlid": 65519, 00:06:13.883 "namespaces": [ 00:06:13.883 { 00:06:13.883 "nsid": 1, 00:06:13.883 "bdev_name": "Null4", 00:06:13.883 "name": "Null4", 00:06:13.883 "nguid": "1AE61130C0EA4F82A0F560D6F0EFB98B", 00:06:13.883 "uuid": "1ae61130-c0ea-4f82-a0f5-60d6f0efb98b" 00:06:13.883 } 00:06:13.883 ] 00:06:13.883 } 00:06:13.883 ] 00:06:13.883 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.883 03:19:51 -- target/discovery.sh@42 -- # seq 1 4 00:06:13.883 03:19:51 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:13.883 03:19:51 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:13.883 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.883 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.883 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.883 03:19:51 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:13.883 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.883 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.883 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.883 03:19:51 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:13.883 03:19:51 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:13.883 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.883 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.883 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.883 03:19:51 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:13.883 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.883 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.883 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.883 03:19:51 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:13.883 03:19:51 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:13.883 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.883 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.883 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.883 03:19:51 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:13.883 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.883 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.883 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.883 03:19:51 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:13.883 03:19:51 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:13.883 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.883 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.883 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
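Stripped of the harness's rpc_cmd wrapper and xtrace noise, one iteration of the provision/teardown loop above amounts to the following — a sketch invoking scripts/rpc.py directly, which is an assumption, since the log drives these calls through rpc_cmd:

  # 100 MiB null bdev with 512 B blocks, exported as cnode1 on the TCP listener
  scripts/rpc.py bdev_null_create Null1 102400 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # teardown runs in reverse: delete the subsystem first, then its backing bdev
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_null_delete Null1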
00:06:13.883 03:19:51 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:13.883 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.883 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.883 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.883 03:19:51 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:13.883 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.883 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.883 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.883 03:19:51 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:13.883 03:19:51 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:13.883 03:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.883 03:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.883 03:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.883 03:19:51 -- target/discovery.sh@49 -- # check_bdevs= 00:06:13.883 03:19:51 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:13.883 03:19:51 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:13.883 03:19:51 -- target/discovery.sh@57 -- # nvmftestfini 00:06:13.883 03:19:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:13.883 03:19:51 -- nvmf/common.sh@117 -- # sync 00:06:13.883 03:19:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:13.883 03:19:51 -- nvmf/common.sh@120 -- # set +e 00:06:13.883 03:19:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:13.884 03:19:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:13.884 rmmod nvme_tcp 00:06:13.884 rmmod nvme_fabrics 00:06:13.884 rmmod nvme_keyring 00:06:13.884 03:19:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:13.884 03:19:51 -- nvmf/common.sh@124 -- # set -e 00:06:13.884 03:19:51 -- nvmf/common.sh@125 -- # return 0 00:06:13.884 03:19:51 -- nvmf/common.sh@478 -- # '[' -n 157760 ']' 00:06:13.884 03:19:51 -- nvmf/common.sh@479 -- # killprocess 157760 00:06:13.884 03:19:51 -- common/autotest_common.sh@936 -- # '[' -z 157760 ']' 00:06:13.884 03:19:51 -- common/autotest_common.sh@940 -- # kill -0 157760 00:06:13.884 03:19:51 -- common/autotest_common.sh@941 -- # uname 00:06:13.884 03:19:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.884 03:19:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 157760 00:06:14.141 03:19:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.142 03:19:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.142 03:19:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 157760' 00:06:14.142 killing process with pid 157760 00:06:14.142 03:19:51 -- common/autotest_common.sh@955 -- # kill 157760 00:06:14.142 [2024-04-19 03:19:51.465983] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:06:14.142 03:19:51 -- common/autotest_common.sh@960 -- # wait 157760 00:06:14.401 03:19:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:14.401 03:19:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:14.401 03:19:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:14.401 03:19:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:14.401 03:19:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:14.401 03:19:51 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.401 03:19:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:14.401 03:19:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.307 03:19:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:16.307 00:06:16.307 real 0m6.321s 00:06:16.307 user 0m7.371s 00:06:16.307 sys 0m1.975s 00:06:16.307 03:19:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.307 03:19:53 -- common/autotest_common.sh@10 -- # set +x 00:06:16.307 ************************************ 00:06:16.307 END TEST nvmf_discovery 00:06:16.307 ************************************ 00:06:16.307 03:19:53 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:16.307 03:19:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:16.307 03:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.307 03:19:53 -- common/autotest_common.sh@10 -- # set +x 00:06:16.567 ************************************ 00:06:16.567 START TEST nvmf_referrals 00:06:16.567 ************************************ 00:06:16.567 03:19:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:16.567 * Looking for test storage... 00:06:16.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:16.567 03:19:53 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.567 03:19:53 -- nvmf/common.sh@7 -- # uname -s 00:06:16.567 03:19:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.567 03:19:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.567 03:19:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.567 03:19:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.567 03:19:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.567 03:19:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.567 03:19:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.567 03:19:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.567 03:19:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.567 03:19:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.567 03:19:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:16.567 03:19:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:16.567 03:19:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.567 03:19:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.567 03:19:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.567 03:19:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.567 03:19:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.567 03:19:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.567 03:19:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.567 03:19:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.567 03:19:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.567 03:19:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.567 03:19:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.567 03:19:53 -- paths/export.sh@5 -- # export PATH 00:06:16.567 03:19:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.567 03:19:53 -- nvmf/common.sh@47 -- # : 0 00:06:16.567 03:19:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.567 03:19:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.567 03:19:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.567 03:19:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.567 03:19:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.567 03:19:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.567 03:19:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.567 03:19:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.567 03:19:53 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:16.567 03:19:53 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:16.567 03:19:53 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:16.567 03:19:53 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:16.567 03:19:53 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:16.567 03:19:53 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:16.567 03:19:53 -- target/referrals.sh@37 -- # nvmftestinit 00:06:16.567 03:19:53 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:06:16.567 03:19:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.567 03:19:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:16.567 03:19:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:16.567 03:19:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:16.567 03:19:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.567 03:19:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:16.567 03:19:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.567 03:19:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:16.567 03:19:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:16.567 03:19:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:16.567 03:19:54 -- common/autotest_common.sh@10 -- # set +x 00:06:18.471 03:19:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:18.471 03:19:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:18.471 03:19:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:18.471 03:19:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:18.471 03:19:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:18.471 03:19:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:18.471 03:19:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:18.471 03:19:55 -- nvmf/common.sh@295 -- # net_devs=() 00:06:18.471 03:19:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:18.471 03:19:55 -- nvmf/common.sh@296 -- # e810=() 00:06:18.471 03:19:55 -- nvmf/common.sh@296 -- # local -ga e810 00:06:18.471 03:19:55 -- nvmf/common.sh@297 -- # x722=() 00:06:18.471 03:19:55 -- nvmf/common.sh@297 -- # local -ga x722 00:06:18.471 03:19:55 -- nvmf/common.sh@298 -- # mlx=() 00:06:18.471 03:19:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:18.471 03:19:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:18.471 03:19:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:18.471 03:19:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:18.471 03:19:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:18.471 03:19:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:18.471 03:19:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:18.471 03:19:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:18.471 03:19:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:18.471 03:19:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:18.471 03:19:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:18.471 03:19:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:18.471 03:19:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:18.471 03:19:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:18.471 03:19:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:18.471 03:19:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:18.471 03:19:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:18.471 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:18.471 03:19:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:18.471 03:19:55 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:18.471 03:19:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:18.471 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:18.471 03:19:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:18.471 03:19:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:18.471 03:19:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.471 03:19:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:18.471 03:19:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.471 03:19:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:18.471 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:18.471 03:19:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.471 03:19:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:18.471 03:19:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.471 03:19:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:18.471 03:19:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.471 03:19:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:18.471 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:18.471 03:19:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.471 03:19:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:18.471 03:19:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:18.471 03:19:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:18.471 03:19:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:18.471 03:19:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:18.471 03:19:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:18.471 03:19:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:18.471 03:19:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:18.471 03:19:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:18.471 03:19:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:18.471 03:19:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:18.471 03:19:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:18.471 03:19:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:18.471 03:19:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:18.471 03:19:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:18.471 03:19:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:18.471 03:19:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:06:18.471 03:19:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:18.471 03:19:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:18.471 03:19:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:18.471 03:19:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:18.471 03:19:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:18.471 03:19:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:18.730 03:19:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:18.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:18.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:06:18.730 00:06:18.730 --- 10.0.0.2 ping statistics --- 00:06:18.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.730 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:06:18.730 03:19:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:18.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:18.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:06:18.730 00:06:18.730 --- 10.0.0.1 ping statistics --- 00:06:18.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.730 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:06:18.730 03:19:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:18.730 03:19:56 -- nvmf/common.sh@411 -- # return 0 00:06:18.730 03:19:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:18.730 03:19:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:18.730 03:19:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:18.730 03:19:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:18.730 03:19:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:18.730 03:19:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:18.730 03:19:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:18.730 03:19:56 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:18.730 03:19:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:18.730 03:19:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:18.730 03:19:56 -- common/autotest_common.sh@10 -- # set +x 00:06:18.730 03:19:56 -- nvmf/common.sh@470 -- # nvmfpid=159892 00:06:18.730 03:19:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:18.730 03:19:56 -- nvmf/common.sh@471 -- # waitforlisten 159892 00:06:18.730 03:19:56 -- common/autotest_common.sh@817 -- # '[' -z 159892 ']' 00:06:18.730 03:19:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.730 03:19:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:18.730 03:19:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.730 03:19:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:18.730 03:19:56 -- common/autotest_common.sh@10 -- # set +x 00:06:18.730 [2024-04-19 03:19:56.114441] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
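The referrals exercise that follows boils down to the sequence below — again a sketch against this run's target (10.0.0.2, discovery listener on 8009, referral port 4430); the --hostnqn/--hostid arguments shown in the log are omitted here for brevity:

  # register a discovery listener plus a referral, then confirm the initiator
  # sees the referred address in the discovery log page
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals | jq length   # counts registered referrals
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430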
00:06:18.730 [2024-04-19 03:19:56.114527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:18.730 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.730 [2024-04-19 03:19:56.183511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.989 [2024-04-19 03:19:56.303929] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:18.989 [2024-04-19 03:19:56.303993] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:18.990 [2024-04-19 03:19:56.304009] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:18.990 [2024-04-19 03:19:56.304022] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:18.990 [2024-04-19 03:19:56.304034] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:18.990 [2024-04-19 03:19:56.304137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.990 [2024-04-19 03:19:56.304201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.990 [2024-04-19 03:19:56.304234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.990 [2024-04-19 03:19:56.304237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.555 03:19:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:19.555 03:19:57 -- common/autotest_common.sh@850 -- # return 0 00:06:19.555 03:19:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:19.555 03:19:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:19.555 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 03:19:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:19.555 03:19:57 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:19.555 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.555 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 [2024-04-19 03:19:57.075447] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.555 03:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.555 03:19:57 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:19.555 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.555 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 [2024-04-19 03:19:57.087674] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:19.555 03:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.555 03:19:57 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:19.555 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.555 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 03:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.555 03:19:57 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:19.555 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.555 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 03:19:57 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:06:19.555 03:19:57 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:19.555 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.555 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.813 03:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.813 03:19:57 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:19.813 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.813 03:19:57 -- target/referrals.sh@48 -- # jq length 00:06:19.813 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.813 03:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.813 03:19:57 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:19.813 03:19:57 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:19.813 03:19:57 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:19.813 03:19:57 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:19.813 03:19:57 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:19.813 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.813 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.813 03:19:57 -- target/referrals.sh@21 -- # sort 00:06:19.813 03:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.813 03:19:57 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:19.813 03:19:57 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:19.813 03:19:57 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:19.813 03:19:57 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:19.813 03:19:57 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:19.813 03:19:57 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:19.813 03:19:57 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:19.813 03:19:57 -- target/referrals.sh@26 -- # sort 00:06:20.071 03:19:57 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:20.071 03:19:57 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:20.071 03:19:57 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:20.071 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.071 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:20.071 03:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.071 03:19:57 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:20.071 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.071 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:20.071 03:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.071 03:19:57 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:20.071 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.071 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:20.071 03:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.071 03:19:57 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:06:20.071 03:19:57 -- target/referrals.sh@56 -- # jq length 00:06:20.071 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.071 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:20.071 03:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.071 03:19:57 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:20.071 03:19:57 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:20.071 03:19:57 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:20.071 03:19:57 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:20.071 03:19:57 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:20.071 03:19:57 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:20.071 03:19:57 -- target/referrals.sh@26 -- # sort 00:06:20.071 03:19:57 -- target/referrals.sh@26 -- # echo 00:06:20.071 03:19:57 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:20.071 03:19:57 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:20.071 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.071 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:20.071 03:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.071 03:19:57 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:20.071 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.071 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:20.071 03:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.071 03:19:57 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:20.071 03:19:57 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:20.071 03:19:57 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:20.071 03:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.071 03:19:57 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:20.071 03:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:20.071 03:19:57 -- target/referrals.sh@21 -- # sort 00:06:20.071 03:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.071 03:19:57 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:20.071 03:19:57 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:20.071 03:19:57 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:20.071 03:19:57 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:20.071 03:19:57 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:20.071 03:19:57 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:20.071 03:19:57 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:20.071 03:19:57 -- target/referrals.sh@26 -- # sort 00:06:20.329 03:19:57 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:20.329 03:19:57 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:20.329 03:19:57 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:06:20.329 03:19:57 -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:20.329 03:19:57 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:20.329 03:19:57 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:20.329 03:19:57 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:20.587 03:19:57 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:20.587 03:19:57 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:20.587 03:19:57 -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:20.587 03:19:57 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:20.587 03:19:57 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:20.587 03:19:57 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:20.587 03:19:58 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:20.587 03:19:58 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:20.587 03:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.587 03:19:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.587 03:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.587 03:19:58 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:20.587 03:19:58 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:20.587 03:19:58 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:20.587 03:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.587 03:19:58 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:20.587 03:19:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.587 03:19:58 -- target/referrals.sh@21 -- # sort 00:06:20.587 03:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.587 03:19:58 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:20.587 03:19:58 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:20.587 03:19:58 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:20.587 03:19:58 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:20.587 03:19:58 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:20.587 03:19:58 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:20.587 03:19:58 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:20.587 03:19:58 -- target/referrals.sh@26 -- # sort 00:06:20.587 03:19:58 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:20.845 03:19:58 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:20.845 03:19:58 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:20.845 03:19:58 -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:20.845 03:19:58 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:06:20.845 03:19:58 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:20.845 03:19:58 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:20.845 03:19:58 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:20.845 03:19:58 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:20.845 03:19:58 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:20.845 03:19:58 -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:20.845 03:19:58 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:20.845 03:19:58 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:20.845 03:19:58 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:20.845 03:19:58 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:20.845 03:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.845 03:19:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.845 03:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.845 03:19:58 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:20.845 03:19:58 -- target/referrals.sh@82 -- # jq length 00:06:20.845 03:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.845 03:19:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.845 03:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.845 03:19:58 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:20.845 03:19:58 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:20.845 03:19:58 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:20.845 03:19:58 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:20.846 03:19:58 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:20.846 03:19:58 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:20.846 03:19:58 -- target/referrals.sh@26 -- # sort 00:06:21.105 03:19:58 -- target/referrals.sh@26 -- # echo 00:06:21.105 03:19:58 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:21.105 03:19:58 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:21.105 03:19:58 -- target/referrals.sh@86 -- # nvmftestfini 00:06:21.105 03:19:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:21.105 03:19:58 -- nvmf/common.sh@117 -- # sync 00:06:21.105 03:19:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:21.105 03:19:58 -- nvmf/common.sh@120 -- # set +e 00:06:21.105 03:19:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:21.105 03:19:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:21.105 rmmod nvme_tcp 00:06:21.105 rmmod nvme_fabrics 00:06:21.105 rmmod nvme_keyring 00:06:21.105 03:19:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:21.105 03:19:58 -- nvmf/common.sh@124 -- # set -e 
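The passage above is the heart of the referrals test: discovery referrals are added and removed through the RPC socket, and every change is cross-checked from the host side with nvme discover. Reduced to its essentials, one round trip looks like the sketch below (a minimal sketch, not the test script itself; rpc_cmd in the trace is assumed to wrap SPDK's scripts/rpc.py from an SPDK checkout, the host NQN/ID flags from the run are omitted for brevity, and addresses/ports are the ones used in this run):

  # point discovery clients at an additional discovery service
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  # count configured referrals on the target
  scripts/rpc.py nvmf_discovery_get_referrals | jq length
  # confirm the referral shows up in the discovery log page seen by hosts
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  # and remove it again
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430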
00:06:21.105 03:19:58 -- nvmf/common.sh@125 -- # return 0 00:06:21.105 03:19:58 -- nvmf/common.sh@478 -- # '[' -n 159892 ']' 00:06:21.105 03:19:58 -- nvmf/common.sh@479 -- # killprocess 159892 00:06:21.105 03:19:58 -- common/autotest_common.sh@936 -- # '[' -z 159892 ']' 00:06:21.105 03:19:58 -- common/autotest_common.sh@940 -- # kill -0 159892 00:06:21.105 03:19:58 -- common/autotest_common.sh@941 -- # uname 00:06:21.105 03:19:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:21.105 03:19:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 159892 00:06:21.105 03:19:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:21.105 03:19:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:21.105 03:19:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 159892' 00:06:21.105 killing process with pid 159892 00:06:21.105 03:19:58 -- common/autotest_common.sh@955 -- # kill 159892 00:06:21.105 03:19:58 -- common/autotest_common.sh@960 -- # wait 159892 00:06:21.373 03:19:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:21.373 03:19:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:21.373 03:19:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:21.373 03:19:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:21.373 03:19:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:21.373 03:19:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.373 03:19:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:21.373 03:19:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.284 03:20:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:23.284 00:06:23.284 real 0m6.903s 00:06:23.284 user 0m11.308s 00:06:23.284 sys 0m1.910s 00:06:23.284 03:20:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.284 03:20:00 -- common/autotest_common.sh@10 -- # set +x 00:06:23.284 ************************************ 00:06:23.284 END TEST nvmf_referrals 00:06:23.284 ************************************ 00:06:23.543 03:20:00 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:23.543 03:20:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:23.543 03:20:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.543 03:20:00 -- common/autotest_common.sh@10 -- # set +x 00:06:23.543 ************************************ 00:06:23.543 START TEST nvmf_connect_disconnect 00:06:23.543 ************************************ 00:06:23.543 03:20:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:23.543 * Looking for test storage... 
00:06:23.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.543 03:20:01 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.543 03:20:01 -- nvmf/common.sh@7 -- # uname -s 00:06:23.543 03:20:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.543 03:20:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.543 03:20:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.543 03:20:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.543 03:20:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.543 03:20:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.543 03:20:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.543 03:20:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.543 03:20:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.543 03:20:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.543 03:20:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:23.543 03:20:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:23.543 03:20:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.543 03:20:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.543 03:20:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.543 03:20:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.543 03:20:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.543 03:20:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.543 03:20:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.543 03:20:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.543 03:20:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.543 03:20:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.543 03:20:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.543 03:20:01 -- paths/export.sh@5 -- # export PATH 00:06:23.543 03:20:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.543 03:20:01 -- nvmf/common.sh@47 -- # : 0 00:06:23.543 03:20:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.543 03:20:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.543 03:20:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.543 03:20:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.543 03:20:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.543 03:20:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.543 03:20:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.543 03:20:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.543 03:20:01 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:23.543 03:20:01 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:23.543 03:20:01 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:23.543 03:20:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:23.543 03:20:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.543 03:20:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:23.543 03:20:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:23.543 03:20:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:23.543 03:20:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.543 03:20:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:23.543 03:20:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.543 03:20:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:23.543 03:20:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:23.543 03:20:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:23.543 03:20:01 -- common/autotest_common.sh@10 -- # set +x 00:06:26.076 03:20:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:26.076 03:20:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:26.077 03:20:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:26.077 03:20:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:26.077 03:20:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:26.077 03:20:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:26.077 03:20:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:26.077 03:20:03 -- nvmf/common.sh@295 -- # net_devs=() 00:06:26.077 03:20:03 -- nvmf/common.sh@295 -- # local -ga net_devs 
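The array declarations above feed gather_supported_nvmf_pci_devs, which the next lines trace: it matches PCI vendor:device IDs against a cached bus listing (0x8086:0x159b, matched twice below, is the Intel E810 "ice" part selected by SPDK_TEST_NVMF_NICS=e810) and resolves each matched function to its kernel netdev through sysfs. A rough standalone equivalent, assuming lspci(8) and sysfs are available; the loop is illustrative, not common.sh's actual code, and the device ID is taken from this log:

  # find netdevs backed by Intel E810 (8086:159b) PCI functions
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null
  done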
00:06:26.077 03:20:03 -- nvmf/common.sh@296 -- # e810=() 00:06:26.077 03:20:03 -- nvmf/common.sh@296 -- # local -ga e810 00:06:26.077 03:20:03 -- nvmf/common.sh@297 -- # x722=() 00:06:26.077 03:20:03 -- nvmf/common.sh@297 -- # local -ga x722 00:06:26.077 03:20:03 -- nvmf/common.sh@298 -- # mlx=() 00:06:26.077 03:20:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:26.077 03:20:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:26.077 03:20:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:26.077 03:20:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:26.077 03:20:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:26.077 03:20:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:26.077 03:20:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:26.077 03:20:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:26.077 03:20:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:26.077 03:20:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:26.077 03:20:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:26.077 03:20:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:26.077 03:20:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:26.077 03:20:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:26.077 03:20:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:26.077 03:20:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:26.077 03:20:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:26.077 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:26.077 03:20:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:26.077 03:20:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:26.077 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:26.077 03:20:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:26.077 03:20:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:26.077 03:20:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.077 03:20:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:26.077 03:20:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.077 03:20:03 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:06:26.077 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:26.077 03:20:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.077 03:20:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:26.077 03:20:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.077 03:20:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:26.077 03:20:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.077 03:20:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:26.077 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:26.077 03:20:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.077 03:20:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:26.077 03:20:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:26.077 03:20:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:26.077 03:20:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.077 03:20:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.077 03:20:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:26.077 03:20:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:26.077 03:20:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:26.077 03:20:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:26.077 03:20:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:26.077 03:20:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:26.077 03:20:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.077 03:20:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:26.077 03:20:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:26.077 03:20:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:26.077 03:20:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:26.077 03:20:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:26.077 03:20:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:26.077 03:20:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:26.077 03:20:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:26.077 03:20:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:26.077 03:20:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:26.077 03:20:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:26.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:26.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:06:26.077 00:06:26.077 --- 10.0.0.2 ping statistics --- 00:06:26.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.077 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:06:26.077 03:20:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:26.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:26.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:06:26.077 00:06:26.077 --- 10.0.0.1 ping statistics --- 00:06:26.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.077 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:06:26.077 03:20:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:26.077 03:20:03 -- nvmf/common.sh@411 -- # return 0 00:06:26.077 03:20:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:26.077 03:20:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:26.077 03:20:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:26.077 03:20:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:26.077 03:20:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:26.077 03:20:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:26.077 03:20:03 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:26.077 03:20:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:26.077 03:20:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:26.077 03:20:03 -- common/autotest_common.sh@10 -- # set +x 00:06:26.077 03:20:03 -- nvmf/common.sh@470 -- # nvmfpid=162197 00:06:26.077 03:20:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:26.077 03:20:03 -- nvmf/common.sh@471 -- # waitforlisten 162197 00:06:26.077 03:20:03 -- common/autotest_common.sh@817 -- # '[' -z 162197 ']' 00:06:26.077 03:20:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.077 03:20:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:26.077 03:20:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.077 03:20:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:26.077 03:20:03 -- common/autotest_common.sh@10 -- # set +x 00:06:26.077 [2024-04-19 03:20:03.236668] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:06:26.077 [2024-04-19 03:20:03.236741] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.077 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.077 [2024-04-19 03:20:03.304988] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.077 [2024-04-19 03:20:03.426617] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:26.077 [2024-04-19 03:20:03.426676] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:26.077 [2024-04-19 03:20:03.426697] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.077 [2024-04-19 03:20:03.426712] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.077 [2024-04-19 03:20:03.426724] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
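Per the notices above, nvmf_tgt is launched with tracepoint group mask 0xFFFF, so its trace buffer can be inspected while the test runs or preserved afterwards. A minimal sketch using only what the notices themselves name (the spdk_trace binary path is illustrative of an SPDK build tree):

  # snapshot events from the running nvmf app with shm id 0
  build/bin/spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0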
00:06:26.077 [2024-04-19 03:20:03.426801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.077 [2024-04-19 03:20:03.426857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.077 [2024-04-19 03:20:03.426888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.077 [2024-04-19 03:20:03.426891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.643 03:20:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:26.643 03:20:04 -- common/autotest_common.sh@850 -- # return 0 00:06:26.643 03:20:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:26.643 03:20:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:26.643 03:20:04 -- common/autotest_common.sh@10 -- # set +x 00:06:26.643 03:20:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:26.643 03:20:04 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:26.643 03:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:26.643 03:20:04 -- common/autotest_common.sh@10 -- # set +x 00:06:26.643 [2024-04-19 03:20:04.190375] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.643 03:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:26.643 03:20:04 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:26.643 03:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:26.643 03:20:04 -- common/autotest_common.sh@10 -- # set +x 00:06:26.901 03:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:26.901 03:20:04 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:26.901 03:20:04 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:26.901 03:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:26.901 03:20:04 -- common/autotest_common.sh@10 -- # set +x 00:06:26.901 03:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:26.901 03:20:04 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:26.901 03:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:26.901 03:20:04 -- common/autotest_common.sh@10 -- # set +x 00:06:26.901 03:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:26.901 03:20:04 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:26.901 03:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:26.901 03:20:04 -- common/autotest_common.sh@10 -- # set +x 00:06:26.901 [2024-04-19 03:20:04.247338] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.901 03:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:26.901 03:20:04 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:26.901 03:20:04 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:26.901 03:20:04 -- target/connect_disconnect.sh@34 -- # set +x 00:06:29.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:31.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:35.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:37.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:40.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:40.329 03:20:17 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:40.329 03:20:17 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:40.329 03:20:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:40.329 03:20:17 -- nvmf/common.sh@117 -- # sync 00:06:40.329 03:20:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:40.329 03:20:17 -- nvmf/common.sh@120 -- # set +e 00:06:40.329 03:20:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:40.329 03:20:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:40.329 rmmod nvme_tcp 00:06:40.329 rmmod nvme_fabrics 00:06:40.329 rmmod nvme_keyring 00:06:40.329 03:20:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:40.329 03:20:17 -- nvmf/common.sh@124 -- # set -e 00:06:40.329 03:20:17 -- nvmf/common.sh@125 -- # return 0 00:06:40.329 03:20:17 -- nvmf/common.sh@478 -- # '[' -n 162197 ']' 00:06:40.329 03:20:17 -- nvmf/common.sh@479 -- # killprocess 162197 00:06:40.329 03:20:17 -- common/autotest_common.sh@936 -- # '[' -z 162197 ']' 00:06:40.329 03:20:17 -- common/autotest_common.sh@940 -- # kill -0 162197 00:06:40.329 03:20:17 -- common/autotest_common.sh@941 -- # uname 00:06:40.329 03:20:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:40.329 03:20:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 162197 00:06:40.329 03:20:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:40.329 03:20:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:40.329 03:20:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 162197' 00:06:40.329 killing process with pid 162197 00:06:40.329 03:20:17 -- common/autotest_common.sh@955 -- # kill 162197 00:06:40.329 03:20:17 -- common/autotest_common.sh@960 -- # wait 162197 00:06:40.587 03:20:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:40.587 03:20:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:40.587 03:20:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:40.587 03:20:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:40.587 03:20:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:40.587 03:20:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.587 03:20:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:40.587 03:20:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.492 03:20:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:42.492 00:06:42.492 real 0m19.096s 00:06:42.492 user 0m57.756s 00:06:42.492 sys 0m3.195s 00:06:42.751 03:20:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.751 03:20:20 -- common/autotest_common.sh@10 -- # set +x 00:06:42.751 ************************************ 00:06:42.751 END TEST nvmf_connect_disconnect 00:06:42.751 ************************************ 00:06:42.752 03:20:20 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:42.752 03:20:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:42.752 03:20:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.752 03:20:20 -- common/autotest_common.sh@10 -- # set +x 00:06:42.752 ************************************ 00:06:42.752 START TEST nvmf_multitarget 00:06:42.752 ************************************ 00:06:42.752 03:20:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 
00:06:42.752 * Looking for test storage... 00:06:42.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.752 03:20:20 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.752 03:20:20 -- nvmf/common.sh@7 -- # uname -s 00:06:42.752 03:20:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.752 03:20:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.752 03:20:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.752 03:20:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.752 03:20:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.752 03:20:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.752 03:20:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.752 03:20:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.752 03:20:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.752 03:20:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.752 03:20:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:42.752 03:20:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:42.752 03:20:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.752 03:20:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.752 03:20:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.752 03:20:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.752 03:20:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.752 03:20:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.752 03:20:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.752 03:20:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.752 03:20:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.752 03:20:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.752 03:20:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.752 03:20:20 -- paths/export.sh@5 -- # export PATH 00:06:42.752 03:20:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.752 03:20:20 -- nvmf/common.sh@47 -- # : 0 00:06:42.752 03:20:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.752 03:20:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.752 03:20:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.752 03:20:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.752 03:20:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.752 03:20:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.752 03:20:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.752 03:20:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.752 03:20:20 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:42.752 03:20:20 -- target/multitarget.sh@15 -- # nvmftestinit 00:06:42.752 03:20:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:42.752 03:20:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.752 03:20:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:42.752 03:20:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:42.752 03:20:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:42.752 03:20:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.752 03:20:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.752 03:20:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.752 03:20:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:42.752 03:20:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:42.752 03:20:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:42.752 03:20:20 -- common/autotest_common.sh@10 -- # set +x 00:06:45.285 03:20:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:45.285 03:20:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:45.285 03:20:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:45.285 03:20:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:45.285 03:20:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:45.285 03:20:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:45.285 03:20:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:45.285 03:20:22 -- nvmf/common.sh@295 -- # net_devs=() 00:06:45.285 03:20:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:45.285 03:20:22 -- 
nvmf/common.sh@296 -- # e810=() 00:06:45.285 03:20:22 -- nvmf/common.sh@296 -- # local -ga e810 00:06:45.285 03:20:22 -- nvmf/common.sh@297 -- # x722=() 00:06:45.285 03:20:22 -- nvmf/common.sh@297 -- # local -ga x722 00:06:45.285 03:20:22 -- nvmf/common.sh@298 -- # mlx=() 00:06:45.285 03:20:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:45.285 03:20:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.285 03:20:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.285 03:20:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.285 03:20:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.285 03:20:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.285 03:20:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.285 03:20:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.285 03:20:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.285 03:20:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.285 03:20:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.285 03:20:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.285 03:20:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:45.285 03:20:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:45.285 03:20:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:45.285 03:20:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.285 03:20:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:45.285 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:45.285 03:20:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.285 03:20:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:45.285 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:45.285 03:20:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:45.285 03:20:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.285 03:20:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.285 03:20:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:45.285 03:20:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.285 03:20:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:06:45.285 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:45.285 03:20:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.285 03:20:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.285 03:20:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.285 03:20:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:45.285 03:20:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.285 03:20:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:45.285 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:45.285 03:20:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.285 03:20:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:45.285 03:20:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:45.285 03:20:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:45.285 03:20:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.285 03:20:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.285 03:20:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.285 03:20:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:45.285 03:20:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.285 03:20:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.285 03:20:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:45.285 03:20:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.285 03:20:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.285 03:20:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:45.285 03:20:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:45.285 03:20:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.285 03:20:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.285 03:20:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.285 03:20:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.285 03:20:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:45.285 03:20:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.285 03:20:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.285 03:20:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.285 03:20:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:45.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:06:45.285 00:06:45.285 --- 10.0.0.2 ping statistics --- 00:06:45.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.285 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:06:45.285 03:20:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:45.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:06:45.285 00:06:45.285 --- 10.0.0.1 ping statistics --- 00:06:45.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.285 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:06:45.285 03:20:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.285 03:20:22 -- nvmf/common.sh@411 -- # return 0 00:06:45.285 03:20:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:45.285 03:20:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.285 03:20:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:45.285 03:20:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.285 03:20:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:45.285 03:20:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:45.285 03:20:22 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:06:45.285 03:20:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:45.285 03:20:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:45.285 03:20:22 -- common/autotest_common.sh@10 -- # set +x 00:06:45.285 03:20:22 -- nvmf/common.sh@470 -- # nvmfpid=165984 00:06:45.285 03:20:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:45.285 03:20:22 -- nvmf/common.sh@471 -- # waitforlisten 165984 00:06:45.285 03:20:22 -- common/autotest_common.sh@817 -- # '[' -z 165984 ']' 00:06:45.285 03:20:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.285 03:20:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:45.285 03:20:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.285 03:20:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:45.285 03:20:22 -- common/autotest_common.sh@10 -- # set +x 00:06:45.286 [2024-04-19 03:20:22.598048] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:06:45.286 [2024-04-19 03:20:22.598128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.286 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.286 [2024-04-19 03:20:22.666724] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.286 [2024-04-19 03:20:22.786909] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.286 [2024-04-19 03:20:22.786985] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.286 [2024-04-19 03:20:22.787001] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.286 [2024-04-19 03:20:22.787015] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.286 [2024-04-19 03:20:22.787026] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
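This test reuses the same namespace topology as the previous one, built by nvmf_tcp_init and verified by the pings above: one E810 port (cvl_0_0) is moved into a private network namespace to play the target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator, so 10.0.0.1 <-> 10.0.0.2 traffic crosses real NIC hardware on a single machine. The setup reduces to these ip(8)/iptables commands, copied from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP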
00:06:45.286 [2024-04-19 03:20:22.787116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.286 [2024-04-19 03:20:22.787171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.286 [2024-04-19 03:20:22.787205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.286 [2024-04-19 03:20:22.787207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.219 03:20:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:46.219 03:20:23 -- common/autotest_common.sh@850 -- # return 0 00:06:46.219 03:20:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:46.219 03:20:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:46.219 03:20:23 -- common/autotest_common.sh@10 -- # set +x 00:06:46.219 03:20:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.219 03:20:23 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:06:46.219 03:20:23 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:46.219 03:20:23 -- target/multitarget.sh@21 -- # jq length 00:06:46.219 03:20:23 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:06:46.219 03:20:23 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:06:46.219 "nvmf_tgt_1" 00:06:46.219 03:20:23 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:06:46.476 "nvmf_tgt_2" 00:06:46.476 03:20:23 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:46.476 03:20:23 -- target/multitarget.sh@28 -- # jq length 00:06:46.476 03:20:23 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:06:46.477 03:20:23 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:06:46.734 true 00:06:46.734 03:20:24 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:06:46.734 true 00:06:46.734 03:20:24 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:46.734 03:20:24 -- target/multitarget.sh@35 -- # jq length 00:06:46.991 03:20:24 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:06:46.991 03:20:24 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:46.991 03:20:24 -- target/multitarget.sh@41 -- # nvmftestfini 00:06:46.991 03:20:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:46.991 03:20:24 -- nvmf/common.sh@117 -- # sync 00:06:46.991 03:20:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:46.991 03:20:24 -- nvmf/common.sh@120 -- # set +e 00:06:46.991 03:20:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:46.991 03:20:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:46.991 rmmod nvme_tcp 00:06:46.991 rmmod nvme_fabrics 00:06:46.991 rmmod nvme_keyring 00:06:46.991 03:20:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:46.991 03:20:24 -- nvmf/common.sh@124 -- # set -e 00:06:46.991 03:20:24 -- nvmf/common.sh@125 -- # return 0 
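The multitarget checks above drive SPDK's multi-target RPCs through test/nvmf/target/multitarget_rpc.py: two extra targets are created alongside the default one, jq confirms the target count goes 1 -> 3 -> 1, and each nvmf_delete_target prints true. Condensed, with the same RPC names and flags as invoked above (script path shortened for readability):

  multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
  multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
  multitarget_rpc.py nvmf_get_targets | jq length    # now 3
  multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
  multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
  multitarget_rpc.py nvmf_get_targets | jq length    # back to 1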
00:06:46.991 03:20:24 -- nvmf/common.sh@478 -- # '[' -n 165984 ']' 00:06:46.991 03:20:24 -- nvmf/common.sh@479 -- # killprocess 165984 00:06:46.991 03:20:24 -- common/autotest_common.sh@936 -- # '[' -z 165984 ']' 00:06:46.991 03:20:24 -- common/autotest_common.sh@940 -- # kill -0 165984 00:06:46.991 03:20:24 -- common/autotest_common.sh@941 -- # uname 00:06:46.991 03:20:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:46.991 03:20:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 165984 00:06:46.991 03:20:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:46.991 03:20:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:46.991 03:20:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 165984' 00:06:46.991 killing process with pid 165984 00:06:46.991 03:20:24 -- common/autotest_common.sh@955 -- # kill 165984 00:06:46.991 03:20:24 -- common/autotest_common.sh@960 -- # wait 165984 00:06:47.248 03:20:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:47.248 03:20:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:47.248 03:20:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:47.248 03:20:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:47.248 03:20:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:47.248 03:20:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.248 03:20:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:47.248 03:20:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.148 03:20:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:49.148 00:06:49.148 real 0m6.528s 00:06:49.148 user 0m9.010s 00:06:49.148 sys 0m2.073s 00:06:49.148 03:20:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.148 03:20:26 -- common/autotest_common.sh@10 -- # set +x 00:06:49.148 ************************************ 00:06:49.148 END TEST nvmf_multitarget 00:06:49.148 ************************************ 00:06:49.406 03:20:26 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:49.406 03:20:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:49.406 03:20:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.406 03:20:26 -- common/autotest_common.sh@10 -- # set +x 00:06:49.406 ************************************ 00:06:49.406 START TEST nvmf_rpc 00:06:49.406 ************************************ 00:06:49.406 03:20:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:49.406 * Looking for test storage... 
00:06:49.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.406 03:20:26 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.406 03:20:26 -- nvmf/common.sh@7 -- # uname -s 00:06:49.406 03:20:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.406 03:20:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.406 03:20:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.406 03:20:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.406 03:20:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.406 03:20:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.406 03:20:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.406 03:20:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.406 03:20:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.406 03:20:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.406 03:20:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:49.406 03:20:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:49.406 03:20:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.406 03:20:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.406 03:20:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.406 03:20:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.406 03:20:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.406 03:20:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.406 03:20:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.406 03:20:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.406 03:20:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.406 03:20:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.406 03:20:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.406 03:20:26 -- paths/export.sh@5 -- # export PATH 00:06:49.406 03:20:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.406 03:20:26 -- nvmf/common.sh@47 -- # : 0 00:06:49.406 03:20:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:49.406 03:20:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:49.406 03:20:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.406 03:20:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.406 03:20:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.407 03:20:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:49.407 03:20:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:49.407 03:20:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:49.407 03:20:26 -- target/rpc.sh@11 -- # loops=5 00:06:49.407 03:20:26 -- target/rpc.sh@23 -- # nvmftestinit 00:06:49.407 03:20:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:49.407 03:20:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.407 03:20:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:49.407 03:20:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:49.407 03:20:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:49.407 03:20:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.407 03:20:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:49.407 03:20:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.407 03:20:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:49.407 03:20:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:49.407 03:20:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:49.407 03:20:26 -- common/autotest_common.sh@10 -- # set +x 00:06:51.325 03:20:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:51.325 03:20:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:51.325 03:20:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:51.325 03:20:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:51.325 03:20:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:51.325 03:20:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:51.325 03:20:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:51.325 03:20:28 -- nvmf/common.sh@295 -- # net_devs=() 00:06:51.325 03:20:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:51.325 03:20:28 -- nvmf/common.sh@296 -- # e810=() 00:06:51.325 03:20:28 -- nvmf/common.sh@296 -- # local -ga e810 00:06:51.325 
03:20:28 -- nvmf/common.sh@297 -- # x722=() 00:06:51.325 03:20:28 -- nvmf/common.sh@297 -- # local -ga x722 00:06:51.325 03:20:28 -- nvmf/common.sh@298 -- # mlx=() 00:06:51.325 03:20:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:51.325 03:20:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.325 03:20:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.325 03:20:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.325 03:20:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.325 03:20:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.325 03:20:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.325 03:20:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.325 03:20:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.325 03:20:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.325 03:20:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.325 03:20:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.325 03:20:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:51.325 03:20:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:51.325 03:20:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:51.325 03:20:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.325 03:20:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:51.325 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:51.325 03:20:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.325 03:20:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:51.325 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:51.325 03:20:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:51.325 03:20:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:51.325 03:20:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.325 03:20:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.325 03:20:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:51.325 03:20:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.325 03:20:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:51.325 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:51.325 03:20:28 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:51.325 03:20:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.325 03:20:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.325 03:20:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:51.582 03:20:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.582 03:20:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:51.582 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:51.582 03:20:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.582 03:20:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:51.582 03:20:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:51.582 03:20:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:51.582 03:20:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:51.582 03:20:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:51.582 03:20:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.582 03:20:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.582 03:20:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.582 03:20:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:51.582 03:20:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.582 03:20:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.582 03:20:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:51.582 03:20:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.582 03:20:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.582 03:20:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:51.582 03:20:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:51.582 03:20:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.582 03:20:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.582 03:20:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.582 03:20:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.582 03:20:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:51.582 03:20:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.582 03:20:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.582 03:20:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.582 03:20:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:51.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:06:51.582 00:06:51.582 --- 10.0.0.2 ping statistics --- 00:06:51.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.582 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:06:51.582 03:20:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:51.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:06:51.582 00:06:51.582 --- 10.0.0.1 ping statistics --- 00:06:51.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.582 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:06:51.582 03:20:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.582 03:20:29 -- nvmf/common.sh@411 -- # return 0 00:06:51.582 03:20:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:51.582 03:20:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.582 03:20:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:51.582 03:20:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:51.582 03:20:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.582 03:20:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:51.582 03:20:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:51.582 03:20:29 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:06:51.582 03:20:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:51.582 03:20:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:51.582 03:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:51.582 03:20:29 -- nvmf/common.sh@470 -- # nvmfpid=168216 00:06:51.582 03:20:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:51.582 03:20:29 -- nvmf/common.sh@471 -- # waitforlisten 168216 00:06:51.582 03:20:29 -- common/autotest_common.sh@817 -- # '[' -z 168216 ']' 00:06:51.582 03:20:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.582 03:20:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:51.582 03:20:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.582 03:20:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:51.582 03:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:51.582 [2024-04-19 03:20:29.075246] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:06:51.582 [2024-04-19 03:20:29.075340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.582 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.839 [2024-04-19 03:20:29.141898] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.839 [2024-04-19 03:20:29.253368] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.839 [2024-04-19 03:20:29.253436] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.839 [2024-04-19 03:20:29.253465] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.839 [2024-04-19 03:20:29.253477] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.839 [2024-04-19 03:20:29.253486] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
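As in the earlier multitarget run, `nvmf_tcp_init` splits the two e810 ports across network namespaces: the target port (cvl_0_0, 10.0.0.2) is moved into cvl_0_0_ns_spdk while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, and both directions are verified with a one-packet ping. Reduced to its essentials, with device names and addresses exactly as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> root ns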
00:06:51.839 [2024-04-19 03:20:29.253547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.839 [2024-04-19 03:20:29.253607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.839 [2024-04-19 03:20:29.253637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.839 [2024-04-19 03:20:29.253640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.839 03:20:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:51.839 03:20:29 -- common/autotest_common.sh@850 -- # return 0 00:06:51.839 03:20:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:51.839 03:20:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:51.839 03:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.096 03:20:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.096 03:20:29 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:06:52.096 03:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.096 03:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.096 03:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.096 03:20:29 -- target/rpc.sh@26 -- # stats='{ 00:06:52.096 "tick_rate": 2700000000, 00:06:52.096 "poll_groups": [ 00:06:52.096 { 00:06:52.096 "name": "nvmf_tgt_poll_group_0", 00:06:52.096 "admin_qpairs": 0, 00:06:52.096 "io_qpairs": 0, 00:06:52.096 "current_admin_qpairs": 0, 00:06:52.096 "current_io_qpairs": 0, 00:06:52.096 "pending_bdev_io": 0, 00:06:52.096 "completed_nvme_io": 0, 00:06:52.096 "transports": [] 00:06:52.096 }, 00:06:52.096 { 00:06:52.096 "name": "nvmf_tgt_poll_group_1", 00:06:52.096 "admin_qpairs": 0, 00:06:52.096 "io_qpairs": 0, 00:06:52.097 "current_admin_qpairs": 0, 00:06:52.097 "current_io_qpairs": 0, 00:06:52.097 "pending_bdev_io": 0, 00:06:52.097 "completed_nvme_io": 0, 00:06:52.097 "transports": [] 00:06:52.097 }, 00:06:52.097 { 00:06:52.097 "name": "nvmf_tgt_poll_group_2", 00:06:52.097 "admin_qpairs": 0, 00:06:52.097 "io_qpairs": 0, 00:06:52.097 "current_admin_qpairs": 0, 00:06:52.097 "current_io_qpairs": 0, 00:06:52.097 "pending_bdev_io": 0, 00:06:52.097 "completed_nvme_io": 0, 00:06:52.097 "transports": [] 00:06:52.097 }, 00:06:52.097 { 00:06:52.097 "name": "nvmf_tgt_poll_group_3", 00:06:52.097 "admin_qpairs": 0, 00:06:52.097 "io_qpairs": 0, 00:06:52.097 "current_admin_qpairs": 0, 00:06:52.097 "current_io_qpairs": 0, 00:06:52.097 "pending_bdev_io": 0, 00:06:52.097 "completed_nvme_io": 0, 00:06:52.097 "transports": [] 00:06:52.097 } 00:06:52.097 ] 00:06:52.097 }' 00:06:52.097 03:20:29 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:06:52.097 03:20:29 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:06:52.097 03:20:29 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:06:52.097 03:20:29 -- target/rpc.sh@15 -- # wc -l 00:06:52.097 03:20:29 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:06:52.097 03:20:29 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:06:52.097 03:20:29 -- target/rpc.sh@29 -- # [[ null == null ]] 00:06:52.097 03:20:29 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:52.097 03:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.097 03:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.097 [2024-04-19 03:20:29.487487] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.097 03:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.097 03:20:29 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:06:52.097 03:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.097 03:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.097 03:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.097 03:20:29 -- target/rpc.sh@33 -- # stats='{ 00:06:52.097 "tick_rate": 2700000000, 00:06:52.097 "poll_groups": [ 00:06:52.097 { 00:06:52.097 "name": "nvmf_tgt_poll_group_0", 00:06:52.097 "admin_qpairs": 0, 00:06:52.097 "io_qpairs": 0, 00:06:52.097 "current_admin_qpairs": 0, 00:06:52.097 "current_io_qpairs": 0, 00:06:52.097 "pending_bdev_io": 0, 00:06:52.097 "completed_nvme_io": 0, 00:06:52.097 "transports": [ 00:06:52.097 { 00:06:52.097 "trtype": "TCP" 00:06:52.097 } 00:06:52.097 ] 00:06:52.097 }, 00:06:52.097 { 00:06:52.097 "name": "nvmf_tgt_poll_group_1", 00:06:52.097 "admin_qpairs": 0, 00:06:52.097 "io_qpairs": 0, 00:06:52.097 "current_admin_qpairs": 0, 00:06:52.097 "current_io_qpairs": 0, 00:06:52.097 "pending_bdev_io": 0, 00:06:52.097 "completed_nvme_io": 0, 00:06:52.097 "transports": [ 00:06:52.097 { 00:06:52.097 "trtype": "TCP" 00:06:52.097 } 00:06:52.097 ] 00:06:52.097 }, 00:06:52.097 { 00:06:52.097 "name": "nvmf_tgt_poll_group_2", 00:06:52.097 "admin_qpairs": 0, 00:06:52.097 "io_qpairs": 0, 00:06:52.097 "current_admin_qpairs": 0, 00:06:52.097 "current_io_qpairs": 0, 00:06:52.097 "pending_bdev_io": 0, 00:06:52.097 "completed_nvme_io": 0, 00:06:52.097 "transports": [ 00:06:52.097 { 00:06:52.097 "trtype": "TCP" 00:06:52.097 } 00:06:52.097 ] 00:06:52.097 }, 00:06:52.097 { 00:06:52.097 "name": "nvmf_tgt_poll_group_3", 00:06:52.097 "admin_qpairs": 0, 00:06:52.097 "io_qpairs": 0, 00:06:52.097 "current_admin_qpairs": 0, 00:06:52.097 "current_io_qpairs": 0, 00:06:52.097 "pending_bdev_io": 0, 00:06:52.097 "completed_nvme_io": 0, 00:06:52.097 "transports": [ 00:06:52.097 { 00:06:52.097 "trtype": "TCP" 00:06:52.097 } 00:06:52.097 ] 00:06:52.097 } 00:06:52.097 ] 00:06:52.097 }' 00:06:52.097 03:20:29 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:06:52.097 03:20:29 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:06:52.097 03:20:29 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:06:52.097 03:20:29 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:52.097 03:20:29 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:06:52.097 03:20:29 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:06:52.097 03:20:29 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:06:52.097 03:20:29 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:06:52.097 03:20:29 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:52.097 03:20:29 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:06:52.097 03:20:29 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:06:52.097 03:20:29 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:06:52.097 03:20:29 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:06:52.097 03:20:29 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:06:52.097 03:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.097 03:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.097 Malloc1 00:06:52.097 03:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.097 03:20:29 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:52.097 03:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.097 03:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.097 
03:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.097 03:20:29 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:52.097 03:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.097 03:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.097 03:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.097 03:20:29 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:06:52.097 03:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.097 03:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.097 03:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.097 03:20:29 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.097 03:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.097 03:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.097 [2024-04-19 03:20:29.632891] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.097 03:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.097 03:20:29 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:52.097 03:20:29 -- common/autotest_common.sh@638 -- # local es=0 00:06:52.097 03:20:29 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:52.097 03:20:29 -- common/autotest_common.sh@626 -- # local arg=nvme 00:06:52.097 03:20:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:52.097 03:20:29 -- common/autotest_common.sh@630 -- # type -t nvme 00:06:52.097 03:20:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:52.097 03:20:29 -- common/autotest_common.sh@632 -- # type -P nvme 00:06:52.097 03:20:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:52.097 03:20:29 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:06:52.097 03:20:29 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:06:52.097 03:20:29 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:52.354 [2024-04-19 03:20:29.655271] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:52.354 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:52.354 could not add new controller: failed to write to nvme-fabrics device 00:06:52.354 03:20:29 -- common/autotest_common.sh@641 -- # es=1 00:06:52.354 03:20:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:52.354 03:20:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:52.354 03:20:29 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:06:52.354 03:20:29 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:52.354 03:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.354 03:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.354 03:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.354 03:20:29 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:52.918 03:20:30 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:06:52.918 03:20:30 -- common/autotest_common.sh@1184 -- # local i=0 00:06:52.918 03:20:30 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:52.918 03:20:30 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:52.918 03:20:30 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:54.816 03:20:32 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:54.816 03:20:32 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:54.816 03:20:32 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:54.816 03:20:32 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:54.816 03:20:32 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:54.816 03:20:32 -- common/autotest_common.sh@1194 -- # return 0 00:06:54.816 03:20:32 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:54.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:54.816 03:20:32 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:54.816 03:20:32 -- common/autotest_common.sh@1205 -- # local i=0 00:06:54.816 03:20:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:54.816 03:20:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:54.816 03:20:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:54.816 03:20:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:55.112 03:20:32 -- common/autotest_common.sh@1217 -- # return 0 00:06:55.112 03:20:32 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.112 03:20:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:55.112 03:20:32 -- common/autotest_common.sh@10 -- # set +x 00:06:55.112 03:20:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:55.112 03:20:32 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:55.112 03:20:32 -- common/autotest_common.sh@638 -- # local es=0 00:06:55.112 03:20:32 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:55.112 03:20:32 -- common/autotest_common.sh@626 -- # local arg=nvme 00:06:55.112 03:20:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:55.112 03:20:32 -- common/autotest_common.sh@630 -- # type -t nvme 00:06:55.112 03:20:32 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:55.112 03:20:32 -- common/autotest_common.sh@632 -- # type -P nvme 00:06:55.112 03:20:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:55.112 03:20:32 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:06:55.112 03:20:32 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:06:55.112 03:20:32 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:55.112 [2024-04-19 03:20:32.403896] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:55.112 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:55.112 could not add new controller: failed to write to nvme-fabrics device 00:06:55.112 03:20:32 -- common/autotest_common.sh@641 -- # es=1 00:06:55.112 03:20:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:55.112 03:20:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:55.112 03:20:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:55.112 03:20:32 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:06:55.112 03:20:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:55.112 03:20:32 -- common/autotest_common.sh@10 -- # set +x 00:06:55.112 03:20:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:55.112 03:20:32 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:55.677 03:20:32 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:06:55.677 03:20:32 -- common/autotest_common.sh@1184 -- # local i=0 00:06:55.677 03:20:32 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:55.677 03:20:32 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:55.677 03:20:32 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:57.575 03:20:34 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:57.575 03:20:34 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:57.575 03:20:34 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:57.575 03:20:34 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:57.575 03:20:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:57.575 03:20:34 -- common/autotest_common.sh@1194 -- # return 0 00:06:57.575 03:20:34 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:57.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.575 03:20:35 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:57.575 03:20:35 -- common/autotest_common.sh@1205 -- # local i=0 00:06:57.575 03:20:35 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:57.575 03:20:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:57.575 03:20:35 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:57.575 03:20:35 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:57.575 03:20:35 -- common/autotest_common.sh@1217 -- # return 0 00:06:57.575 03:20:35 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:57.575 03:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:57.575 03:20:35 -- common/autotest_common.sh@10 -- # set +x 00:06:57.575 03:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:57.575 03:20:35 -- target/rpc.sh@81 -- # seq 1 5 00:06:57.575 03:20:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:06:57.575 03:20:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:06:57.575 03:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:57.575 03:20:35 -- common/autotest_common.sh@10 -- # set +x 00:06:57.575 03:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:57.575 03:20:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:57.575 03:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:57.575 03:20:35 -- common/autotest_common.sh@10 -- # set +x 00:06:57.575 [2024-04-19 03:20:35.114248] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:57.575 03:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:57.575 03:20:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:06:57.575 03:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:57.575 03:20:35 -- common/autotest_common.sh@10 -- # set +x 00:06:57.575 03:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:57.575 03:20:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:06:57.575 03:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:57.575 03:20:35 -- common/autotest_common.sh@10 -- # set +x 00:06:57.833 03:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:57.833 03:20:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:58.397 03:20:35 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:06:58.397 03:20:35 -- common/autotest_common.sh@1184 -- # local i=0 00:06:58.397 03:20:35 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:58.397 03:20:35 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:58.397 03:20:35 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:00.296 03:20:37 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:00.296 03:20:37 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:00.296 03:20:37 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:00.296 03:20:37 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:00.296 03:20:37 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:00.296 03:20:37 -- common/autotest_common.sh@1194 -- # return 0 00:07:00.296 03:20:37 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:00.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:00.553 03:20:37 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:00.553 03:20:37 -- common/autotest_common.sh@1205 -- # local i=0 00:07:00.553 03:20:37 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:00.553 03:20:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
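The failed/successful connect pair above is the host access-control check: a connect is expected to fail while the host NQN is not on the subsystem's allow list, succeed once it is added, fail again after it is removed, and succeed again once any-host access is re-enabled. Stripped of the `NOT` wrapper, the gate looks roughly like this (`rpc_cmd` is the usual rpc.py wrapper; `$host_nqn` stands for the logged uuid-based host NQN):

    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1       # enforce the allow list
    nvme connect --hostnqn="$host_nqn" -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        -a 10.0.0.2 -s 4420                                                   # rejected: host not allowed
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$host_nqn"
    nvme connect --hostnqn="$host_nqn" -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        -a 10.0.0.2 -s 4420                                                   # accepted
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$host_nqn" # rejected again after this
    rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1       # any host may connect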
00:07:00.553 03:20:37 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:00.553 03:20:37 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:00.554 03:20:37 -- common/autotest_common.sh@1217 -- # return 0 00:07:00.554 03:20:37 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.554 03:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.554 03:20:37 -- common/autotest_common.sh@10 -- # set +x 00:07:00.554 03:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.554 03:20:37 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:00.554 03:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.554 03:20:37 -- common/autotest_common.sh@10 -- # set +x 00:07:00.554 03:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.554 03:20:37 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:00.554 03:20:37 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:00.554 03:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.554 03:20:37 -- common/autotest_common.sh@10 -- # set +x 00:07:00.554 03:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.554 03:20:37 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:00.554 03:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.554 03:20:37 -- common/autotest_common.sh@10 -- # set +x 00:07:00.554 [2024-04-19 03:20:37.909598] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.554 03:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.554 03:20:37 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:00.554 03:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.554 03:20:37 -- common/autotest_common.sh@10 -- # set +x 00:07:00.554 03:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.554 03:20:37 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:00.554 03:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.554 03:20:37 -- common/autotest_common.sh@10 -- # set +x 00:07:00.554 03:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.554 03:20:37 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:01.119 03:20:38 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:01.119 03:20:38 -- common/autotest_common.sh@1184 -- # local i=0 00:07:01.119 03:20:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:01.119 03:20:38 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:01.119 03:20:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:03.015 03:20:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:03.015 03:20:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:03.015 03:20:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:03.274 03:20:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:03.274 03:20:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:03.274 03:20:40 -- 
common/autotest_common.sh@1194 -- # return 0 00:07:03.274 03:20:40 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:03.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.274 03:20:40 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:03.274 03:20:40 -- common/autotest_common.sh@1205 -- # local i=0 00:07:03.274 03:20:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:03.274 03:20:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.274 03:20:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:03.274 03:20:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.274 03:20:40 -- common/autotest_common.sh@1217 -- # return 0 00:07:03.274 03:20:40 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:03.274 03:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.274 03:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.274 03:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.274 03:20:40 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:03.274 03:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.274 03:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.274 03:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.274 03:20:40 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:03.274 03:20:40 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:03.274 03:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.274 03:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.274 03:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.274 03:20:40 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.274 03:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.274 03:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.274 [2024-04-19 03:20:40.675346] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.274 03:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.274 03:20:40 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:03.274 03:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.274 03:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.274 03:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.274 03:20:40 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:03.274 03:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.274 03:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.274 03:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.274 03:20:40 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:03.839 03:20:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:03.839 03:20:41 -- common/autotest_common.sh@1184 -- # local i=0 00:07:03.839 03:20:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:03.839 03:20:41 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:07:03.839 03:20:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:05.736 03:20:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:05.736 03:20:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:05.736 03:20:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:05.736 03:20:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:05.736 03:20:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:05.736 03:20:43 -- common/autotest_common.sh@1194 -- # return 0 00:07:05.736 03:20:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:05.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:05.994 03:20:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:05.994 03:20:43 -- common/autotest_common.sh@1205 -- # local i=0 00:07:05.994 03:20:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:05.994 03:20:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.994 03:20:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:05.994 03:20:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.994 03:20:43 -- common/autotest_common.sh@1217 -- # return 0 00:07:05.994 03:20:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.994 03:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.994 03:20:43 -- common/autotest_common.sh@10 -- # set +x 00:07:05.994 03:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.994 03:20:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:05.994 03:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.994 03:20:43 -- common/autotest_common.sh@10 -- # set +x 00:07:05.994 03:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.994 03:20:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:05.994 03:20:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:05.994 03:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.994 03:20:43 -- common/autotest_common.sh@10 -- # set +x 00:07:05.994 03:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.994 03:20:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.994 03:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.994 03:20:43 -- common/autotest_common.sh@10 -- # set +x 00:07:05.994 [2024-04-19 03:20:43.432994] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.994 03:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.994 03:20:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:05.994 03:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.994 03:20:43 -- common/autotest_common.sh@10 -- # set +x 00:07:05.994 03:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.994 03:20:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:05.994 03:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.994 03:20:43 -- common/autotest_common.sh@10 -- # set +x 00:07:05.994 03:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.994 
03:20:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:06.558 03:20:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:06.558 03:20:44 -- common/autotest_common.sh@1184 -- # local i=0 00:07:06.558 03:20:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:06.558 03:20:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:06.558 03:20:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:09.082 03:20:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:09.082 03:20:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:09.082 03:20:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:09.082 03:20:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:09.082 03:20:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:09.082 03:20:46 -- common/autotest_common.sh@1194 -- # return 0 00:07:09.082 03:20:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:09.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:09.082 03:20:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:09.082 03:20:46 -- common/autotest_common.sh@1205 -- # local i=0 00:07:09.082 03:20:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:09.082 03:20:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:09.082 03:20:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:09.082 03:20:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:09.082 03:20:46 -- common/autotest_common.sh@1217 -- # return 0 00:07:09.082 03:20:46 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.082 03:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:09.082 03:20:46 -- common/autotest_common.sh@10 -- # set +x 00:07:09.082 03:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:09.082 03:20:46 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:09.082 03:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:09.082 03:20:46 -- common/autotest_common.sh@10 -- # set +x 00:07:09.082 03:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:09.082 03:20:46 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:09.082 03:20:46 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:09.082 03:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:09.082 03:20:46 -- common/autotest_common.sh@10 -- # set +x 00:07:09.082 03:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:09.082 03:20:46 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.082 03:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:09.082 03:20:46 -- common/autotest_common.sh@10 -- # set +x 00:07:09.082 [2024-04-19 03:20:46.199024] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.082 03:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:09.082 03:20:46 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:09.082 
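Each of the five loop iterations running here repeats the same attach/connect/tear-down cycle, differing only in timestamps. One iteration, condensed (the `--hostnqn`/`--hostid` flags from the log are omitted for brevity; serial, NQN, namespace ID and address as logged):

    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done  # waitforserial
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1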
03:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:09.082 03:20:46 -- common/autotest_common.sh@10 -- # set +x 00:07:09.082 03:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:09.082 03:20:46 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:09.082 03:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:09.082 03:20:46 -- common/autotest_common.sh@10 -- # set +x 00:07:09.082 03:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:09.082 03:20:46 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:09.339 03:20:46 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:09.339 03:20:46 -- common/autotest_common.sh@1184 -- # local i=0 00:07:09.339 03:20:46 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:09.339 03:20:46 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:09.339 03:20:46 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:11.232 03:20:48 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:11.232 03:20:48 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:11.232 03:20:48 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:11.491 03:20:48 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:11.491 03:20:48 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:11.491 03:20:48 -- common/autotest_common.sh@1194 -- # return 0 00:07:11.491 03:20:48 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:11.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.491 03:20:48 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:11.491 03:20:48 -- common/autotest_common.sh@1205 -- # local i=0 00:07:11.491 03:20:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:11.491 03:20:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.491 03:20:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:11.491 03:20:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.491 03:20:48 -- common/autotest_common.sh@1217 -- # return 0 00:07:11.491 03:20:48 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@99 -- # seq 1 5 00:07:11.491 03:20:48 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:11.491 03:20:48 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 [2024-04-19 03:20:48.892314] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:11.491 03:20:48 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 [2024-04-19 03:20:48.940412] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:11.491 03:20:48 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 [2024-04-19 03:20:48.988584] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.491 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:48 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:11.491 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:49 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:11.491 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:49 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.491 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:49 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.491 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:49 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:11.491 03:20:49 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:11.491 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 03:20:49 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.491 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.491 [2024-04-19 03:20:49.036916] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.491 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.491 
03:20:49 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:11.491 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.491 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.749 03:20:49 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:11.749 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.749 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.749 03:20:49 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.749 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.749 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.749 03:20:49 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.749 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.749 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.749 03:20:49 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:11.749 03:20:49 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:11.749 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.749 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.749 03:20:49 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.749 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.749 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 [2024-04-19 03:20:49.085063] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.749 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.749 03:20:49 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:11.749 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.749 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.749 03:20:49 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:11.749 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.749 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.749 03:20:49 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.749 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.749 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.749 03:20:49 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.749 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.749 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.749 03:20:49 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
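The five passes above repeat one create/tear-down RPC sequence against the running target. A minimal sketch of that loop, assuming the rpc.py path, the Malloc1 bdev, and the 10.0.0.2:4420 listener seen in this trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  loops=5
  for i in $(seq 1 $loops); do
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      # tear down in reverse: drop namespace 1, then the subsystem itself
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

Each pass leaves the target clean, which is why the nvmf_get_stats counters that follow only grow in the completed/admin totals.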
00:07:11.749 03:20:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.749 03:20:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 03:20:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.749 03:20:49 -- target/rpc.sh@110 -- # stats='{ 00:07:11.749 "tick_rate": 2700000000, 00:07:11.749 "poll_groups": [ 00:07:11.749 { 00:07:11.749 "name": "nvmf_tgt_poll_group_0", 00:07:11.749 "admin_qpairs": 2, 00:07:11.749 "io_qpairs": 84, 00:07:11.749 "current_admin_qpairs": 0, 00:07:11.749 "current_io_qpairs": 0, 00:07:11.749 "pending_bdev_io": 0, 00:07:11.749 "completed_nvme_io": 184, 00:07:11.749 "transports": [ 00:07:11.749 { 00:07:11.749 "trtype": "TCP" 00:07:11.749 } 00:07:11.749 ] 00:07:11.749 }, 00:07:11.749 { 00:07:11.749 "name": "nvmf_tgt_poll_group_1", 00:07:11.749 "admin_qpairs": 2, 00:07:11.749 "io_qpairs": 84, 00:07:11.749 "current_admin_qpairs": 0, 00:07:11.749 "current_io_qpairs": 0, 00:07:11.749 "pending_bdev_io": 0, 00:07:11.749 "completed_nvme_io": 183, 00:07:11.749 "transports": [ 00:07:11.749 { 00:07:11.749 "trtype": "TCP" 00:07:11.749 } 00:07:11.749 ] 00:07:11.749 }, 00:07:11.749 { 00:07:11.749 "name": "nvmf_tgt_poll_group_2", 00:07:11.749 "admin_qpairs": 1, 00:07:11.749 "io_qpairs": 84, 00:07:11.749 "current_admin_qpairs": 0, 00:07:11.749 "current_io_qpairs": 0, 00:07:11.749 "pending_bdev_io": 0, 00:07:11.749 "completed_nvme_io": 185, 00:07:11.749 "transports": [ 00:07:11.749 { 00:07:11.749 "trtype": "TCP" 00:07:11.749 } 00:07:11.749 ] 00:07:11.749 }, 00:07:11.749 { 00:07:11.749 "name": "nvmf_tgt_poll_group_3", 00:07:11.749 "admin_qpairs": 2, 00:07:11.749 "io_qpairs": 84, 00:07:11.749 "current_admin_qpairs": 0, 00:07:11.749 "current_io_qpairs": 0, 00:07:11.749 "pending_bdev_io": 0, 00:07:11.749 "completed_nvme_io": 134, 00:07:11.749 "transports": [ 00:07:11.749 { 00:07:11.750 "trtype": "TCP" 00:07:11.750 } 00:07:11.750 ] 00:07:11.750 } 00:07:11.750 ] 00:07:11.750 }' 00:07:11.750 03:20:49 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:11.750 03:20:49 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:11.750 03:20:49 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:11.750 03:20:49 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:11.750 03:20:49 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:11.750 03:20:49 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:11.750 03:20:49 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:11.750 03:20:49 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:11.750 03:20:49 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:11.750 03:20:49 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:11.750 03:20:49 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:11.750 03:20:49 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:11.750 03:20:49 -- target/rpc.sh@123 -- # nvmftestfini 00:07:11.750 03:20:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:11.750 03:20:49 -- nvmf/common.sh@117 -- # sync 00:07:11.750 03:20:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:11.750 03:20:49 -- nvmf/common.sh@120 -- # set +e 00:07:11.750 03:20:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:11.750 03:20:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:11.750 rmmod nvme_tcp 00:07:11.750 rmmod nvme_fabrics 00:07:11.750 rmmod nvme_keyring 00:07:11.750 03:20:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:11.750 03:20:49 -- nvmf/common.sh@124 -- # set -e 00:07:11.750 03:20:49 -- 
nvmf/common.sh@125 -- # return 0 00:07:11.750 03:20:49 -- nvmf/common.sh@478 -- # '[' -n 168216 ']' 00:07:11.750 03:20:49 -- nvmf/common.sh@479 -- # killprocess 168216 00:07:11.750 03:20:49 -- common/autotest_common.sh@936 -- # '[' -z 168216 ']' 00:07:11.750 03:20:49 -- common/autotest_common.sh@940 -- # kill -0 168216 00:07:11.750 03:20:49 -- common/autotest_common.sh@941 -- # uname 00:07:11.750 03:20:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:11.750 03:20:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 168216 00:07:11.750 03:20:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:11.750 03:20:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:11.750 03:20:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 168216' 00:07:11.750 killing process with pid 168216 00:07:11.750 03:20:49 -- common/autotest_common.sh@955 -- # kill 168216 00:07:11.750 03:20:49 -- common/autotest_common.sh@960 -- # wait 168216 00:07:12.317 03:20:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:12.317 03:20:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:12.317 03:20:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:12.317 03:20:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:12.317 03:20:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:12.317 03:20:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.317 03:20:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.317 03:20:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.222 03:20:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:14.222 00:07:14.222 real 0m24.840s 00:07:14.222 user 1m20.306s 00:07:14.222 sys 0m3.886s 00:07:14.222 03:20:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:14.222 03:20:51 -- common/autotest_common.sh@10 -- # set +x 00:07:14.222 ************************************ 00:07:14.222 END TEST nvmf_rpc 00:07:14.222 ************************************ 00:07:14.222 03:20:51 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:14.222 03:20:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:14.222 03:20:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.222 03:20:51 -- common/autotest_common.sh@10 -- # set +x 00:07:14.482 ************************************ 00:07:14.482 START TEST nvmf_invalid 00:07:14.482 ************************************ 00:07:14.482 03:20:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:14.482 * Looking for test storage... 
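The teardown traced above goes through killprocess, which refuses to signal the sudo wrapper and reaps the reactor before the next test starts; a condensed sketch of that helper, assuming the pid is a child of the calling shell (otherwise wait cannot reap it):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 1        # still running?
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1    # never signal the sudo wrapper itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true               # reap it so sockets and hugepages are free for the next test
  }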
00:07:14.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.482 03:20:51 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.482 03:20:51 -- nvmf/common.sh@7 -- # uname -s 00:07:14.482 03:20:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.482 03:20:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.482 03:20:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.482 03:20:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.482 03:20:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.482 03:20:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.482 03:20:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.482 03:20:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.482 03:20:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.482 03:20:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.482 03:20:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:14.482 03:20:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:14.482 03:20:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.482 03:20:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.482 03:20:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.482 03:20:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.483 03:20:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.483 03:20:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.483 03:20:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.483 03:20:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.483 03:20:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.483 03:20:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.483 03:20:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.483 03:20:51 -- paths/export.sh@5 -- # export PATH 00:07:14.483 03:20:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.483 03:20:51 -- nvmf/common.sh@47 -- # : 0 00:07:14.483 03:20:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.483 03:20:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.483 03:20:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.483 03:20:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.483 03:20:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.483 03:20:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.483 03:20:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:14.483 03:20:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.483 03:20:51 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:14.483 03:20:51 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.483 03:20:51 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:14.483 03:20:51 -- target/invalid.sh@14 -- # target=foobar 00:07:14.483 03:20:51 -- target/invalid.sh@16 -- # RANDOM=0 00:07:14.483 03:20:51 -- target/invalid.sh@34 -- # nvmftestinit 00:07:14.483 03:20:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:14.483 03:20:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.483 03:20:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:14.483 03:20:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:14.483 03:20:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:14.483 03:20:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.483 03:20:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.483 03:20:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.483 03:20:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:14.483 03:20:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:14.483 03:20:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:14.483 03:20:51 -- common/autotest_common.sh@10 -- # set +x 00:07:16.434 03:20:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:16.434 03:20:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:16.434 03:20:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:16.434 03:20:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:16.434 03:20:53 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:16.434 03:20:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:16.434 03:20:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:16.435 03:20:53 -- nvmf/common.sh@295 -- # net_devs=() 00:07:16.435 03:20:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:16.435 03:20:53 -- nvmf/common.sh@296 -- # e810=() 00:07:16.435 03:20:53 -- nvmf/common.sh@296 -- # local -ga e810 00:07:16.435 03:20:53 -- nvmf/common.sh@297 -- # x722=() 00:07:16.435 03:20:53 -- nvmf/common.sh@297 -- # local -ga x722 00:07:16.435 03:20:53 -- nvmf/common.sh@298 -- # mlx=() 00:07:16.435 03:20:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:16.435 03:20:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.435 03:20:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.435 03:20:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.435 03:20:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.435 03:20:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.435 03:20:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.435 03:20:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.435 03:20:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.435 03:20:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.435 03:20:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.435 03:20:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.435 03:20:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:16.435 03:20:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:16.435 03:20:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:16.435 03:20:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.435 03:20:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:16.435 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:16.435 03:20:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.435 03:20:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:16.435 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:16.435 03:20:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:16.435 03:20:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.435 
03:20:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.435 03:20:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:16.435 03:20:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.435 03:20:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:16.435 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:16.435 03:20:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.435 03:20:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.435 03:20:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.435 03:20:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:16.435 03:20:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.435 03:20:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:16.435 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:16.435 03:20:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.435 03:20:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:16.435 03:20:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:16.435 03:20:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:16.435 03:20:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:16.435 03:20:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.435 03:20:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.435 03:20:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:16.435 03:20:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:16.435 03:20:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:16.435 03:20:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:16.435 03:20:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:16.435 03:20:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:16.435 03:20:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.435 03:20:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:16.435 03:20:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:16.435 03:20:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:16.435 03:20:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.694 03:20:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.694 03:20:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.694 03:20:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:16.694 03:20:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.694 03:20:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.694 03:20:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.694 03:20:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:16.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:07:16.694 00:07:16.694 --- 10.0.0.2 ping statistics --- 00:07:16.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.694 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:07:16.694 03:20:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:16.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:07:16.694 00:07:16.694 --- 10.0.0.1 ping statistics --- 00:07:16.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.694 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:07:16.694 03:20:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.694 03:20:54 -- nvmf/common.sh@411 -- # return 0 00:07:16.694 03:20:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:16.694 03:20:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.694 03:20:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:16.694 03:20:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:16.694 03:20:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.694 03:20:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:16.694 03:20:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:16.694 03:20:54 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:16.694 03:20:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:16.694 03:20:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:16.694 03:20:54 -- common/autotest_common.sh@10 -- # set +x 00:07:16.694 03:20:54 -- nvmf/common.sh@470 -- # nvmfpid=172723 00:07:16.694 03:20:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:16.694 03:20:54 -- nvmf/common.sh@471 -- # waitforlisten 172723 00:07:16.694 03:20:54 -- common/autotest_common.sh@817 -- # '[' -z 172723 ']' 00:07:16.694 03:20:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.694 03:20:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:16.694 03:20:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.694 03:20:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:16.694 03:20:54 -- common/autotest_common.sh@10 -- # set +x 00:07:16.694 [2024-04-19 03:20:54.163132] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:07:16.694 [2024-04-19 03:20:54.163225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.694 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.694 [2024-04-19 03:20:54.241559] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.953 [2024-04-19 03:20:54.380787] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.953 [2024-04-19 03:20:54.380853] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.953 [2024-04-19 03:20:54.380892] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.953 [2024-04-19 03:20:54.380915] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.953 [2024-04-19 03:20:54.380934] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
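The nvmf_tcp_init traced above moves one E810 port into a private namespace and leaves its peer on the host, so the initiator and target talk over a real NIC rather than loopback before nvmf_tgt starts; condensed, and assuming the cvl_0_0/cvl_0_1 net devices from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                   # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host

Only after both pings succeed is the target launched inside the namespace via ip netns exec, as the nvmfpid/waitforlisten lines that follow show.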
00:07:16.953 [2024-04-19 03:20:54.381050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.953 [2024-04-19 03:20:54.381113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.953 [2024-04-19 03:20:54.381147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.953 [2024-04-19 03:20:54.381164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.210 03:20:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:17.210 03:20:54 -- common/autotest_common.sh@850 -- # return 0 00:07:17.210 03:20:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:17.210 03:20:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:17.210 03:20:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.210 03:20:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.210 03:20:54 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:17.210 03:20:54 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4386 00:07:17.468 [2024-04-19 03:20:54.798950] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:17.468 03:20:54 -- target/invalid.sh@40 -- # out='request: 00:07:17.468 { 00:07:17.468 "nqn": "nqn.2016-06.io.spdk:cnode4386", 00:07:17.468 "tgt_name": "foobar", 00:07:17.468 "method": "nvmf_create_subsystem", 00:07:17.468 "req_id": 1 00:07:17.468 } 00:07:17.468 Got JSON-RPC error response 00:07:17.468 response: 00:07:17.468 { 00:07:17.468 "code": -32603, 00:07:17.468 "message": "Unable to find target foobar" 00:07:17.468 }' 00:07:17.468 03:20:54 -- target/invalid.sh@41 -- # [[ request: 00:07:17.468 { 00:07:17.468 "nqn": "nqn.2016-06.io.spdk:cnode4386", 00:07:17.468 "tgt_name": "foobar", 00:07:17.468 "method": "nvmf_create_subsystem", 00:07:17.468 "req_id": 1 00:07:17.468 } 00:07:17.468 Got JSON-RPC error response 00:07:17.468 response: 00:07:17.468 { 00:07:17.468 "code": -32603, 00:07:17.468 "message": "Unable to find target foobar" 00:07:17.468 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:17.468 03:20:54 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:17.468 03:20:54 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19354 00:07:17.725 [2024-04-19 03:20:55.047797] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19354: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:17.726 03:20:55 -- target/invalid.sh@45 -- # out='request: 00:07:17.726 { 00:07:17.726 "nqn": "nqn.2016-06.io.spdk:cnode19354", 00:07:17.726 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:17.726 "method": "nvmf_create_subsystem", 00:07:17.726 "req_id": 1 00:07:17.726 } 00:07:17.726 Got JSON-RPC error response 00:07:17.726 response: 00:07:17.726 { 00:07:17.726 "code": -32602, 00:07:17.726 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:17.726 }' 00:07:17.726 03:20:55 -- target/invalid.sh@46 -- # [[ request: 00:07:17.726 { 00:07:17.726 "nqn": "nqn.2016-06.io.spdk:cnode19354", 00:07:17.726 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:17.726 "method": "nvmf_create_subsystem", 00:07:17.726 "req_id": 1 00:07:17.726 } 00:07:17.726 Got JSON-RPC error response 00:07:17.726 response: 00:07:17.726 { 
00:07:17.726 "code": -32602, 00:07:17.726 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:17.726 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:17.726 03:20:55 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:17.726 03:20:55 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9322 00:07:17.984 [2024-04-19 03:20:55.300637] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9322: invalid model number 'SPDK_Controller' 00:07:17.984 03:20:55 -- target/invalid.sh@50 -- # out='request: 00:07:17.984 { 00:07:17.984 "nqn": "nqn.2016-06.io.spdk:cnode9322", 00:07:17.984 "model_number": "SPDK_Controller\u001f", 00:07:17.984 "method": "nvmf_create_subsystem", 00:07:17.984 "req_id": 1 00:07:17.984 } 00:07:17.984 Got JSON-RPC error response 00:07:17.984 response: 00:07:17.984 { 00:07:17.984 "code": -32602, 00:07:17.984 "message": "Invalid MN SPDK_Controller\u001f" 00:07:17.984 }' 00:07:17.984 03:20:55 -- target/invalid.sh@51 -- # [[ request: 00:07:17.984 { 00:07:17.984 "nqn": "nqn.2016-06.io.spdk:cnode9322", 00:07:17.984 "model_number": "SPDK_Controller\u001f", 00:07:17.984 "method": "nvmf_create_subsystem", 00:07:17.984 "req_id": 1 00:07:17.984 } 00:07:17.984 Got JSON-RPC error response 00:07:17.984 response: 00:07:17.984 { 00:07:17.984 "code": -32602, 00:07:17.984 "message": "Invalid MN SPDK_Controller\u001f" 00:07:17.984 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:17.984 03:20:55 -- target/invalid.sh@54 -- # gen_random_s 21 00:07:17.984 03:20:55 -- target/invalid.sh@19 -- # local length=21 ll 00:07:17.984 03:20:55 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:17.984 03:20:55 -- target/invalid.sh@21 -- # local chars 00:07:17.984 03:20:55 -- target/invalid.sh@22 -- # local string 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 83 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=S 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 72 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=H 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 35 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+='#' 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 79 00:07:17.984 03:20:55 -- target/invalid.sh@25 
-- # echo -e '\x4f' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=O 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 93 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=']' 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 95 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=_ 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 71 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=G 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 54 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=6 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 126 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+='~' 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 89 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=Y 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 83 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=S 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 114 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=r 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 73 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=I 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 111 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=o 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 46 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # 
echo -e '\x2e' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=. 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 68 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=D 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 122 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=z 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 50 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=2 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 91 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+='[' 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 125 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+='}' 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # printf %x 108 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:17.984 03:20:55 -- target/invalid.sh@25 -- # string+=l 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.984 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.984 03:20:55 -- target/invalid.sh@28 -- # [[ S == \- ]] 00:07:17.984 03:20:55 -- target/invalid.sh@31 -- # echo 'SH#O]_G6~YSrIo.Dz2[}l' 00:07:17.984 03:20:55 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'SH#O]_G6~YSrIo.Dz2[}l' nqn.2016-06.io.spdk:cnode27585 00:07:18.243 [2024-04-19 03:20:55.625752] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27585: invalid serial number 'SH#O]_G6~YSrIo.Dz2[}l' 00:07:18.243 03:20:55 -- target/invalid.sh@54 -- # out='request: 00:07:18.243 { 00:07:18.243 "nqn": "nqn.2016-06.io.spdk:cnode27585", 00:07:18.243 "serial_number": "SH#O]_G6~YSrIo.Dz2[}l", 00:07:18.243 "method": "nvmf_create_subsystem", 00:07:18.243 "req_id": 1 00:07:18.243 } 00:07:18.243 Got JSON-RPC error response 00:07:18.243 response: 00:07:18.243 { 00:07:18.243 "code": -32602, 00:07:18.243 "message": "Invalid SN SH#O]_G6~YSrIo.Dz2[}l" 00:07:18.243 }' 00:07:18.243 03:20:55 -- target/invalid.sh@55 -- # [[ request: 00:07:18.243 { 00:07:18.243 "nqn": "nqn.2016-06.io.spdk:cnode27585", 00:07:18.243 "serial_number": "SH#O]_G6~YSrIo.Dz2[}l", 00:07:18.243 "method": "nvmf_create_subsystem", 00:07:18.243 "req_id": 1 00:07:18.243 } 00:07:18.243 Got JSON-RPC error response 00:07:18.243 response: 00:07:18.243 { 00:07:18.243 "code": -32602, 00:07:18.243 "message": "Invalid SN 
SH#O]_G6~YSrIo.Dz2[}l" 00:07:18.243 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:18.243 03:20:55 -- target/invalid.sh@58 -- # gen_random_s 41 00:07:18.243 03:20:55 -- target/invalid.sh@19 -- # local length=41 ll 00:07:18.243 03:20:55 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:18.243 03:20:55 -- target/invalid.sh@21 -- # local chars 00:07:18.243 03:20:55 -- target/invalid.sh@22 -- # local string 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # printf %x 66 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # string+=B 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # printf %x 56 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # string+=8 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # printf %x 109 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # string+=m 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # printf %x 64 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # string+=@ 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # printf %x 43 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # string+=+ 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # printf %x 102 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # string+=f 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # printf %x 83 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # string+=S 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # printf %x 81 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # string+=Q 00:07:18.243 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.243 
03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # printf %x 37 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:18.243 03:20:55 -- target/invalid.sh@25 -- # string+=% 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 74 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=J 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 79 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=O 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 124 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+='|' 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 119 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=w 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 52 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=4 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 86 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=V 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 95 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=_ 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 89 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=Y 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 100 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=d 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 66 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=B 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 
03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 109 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=m 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 82 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=R 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 84 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=T 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 101 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=e 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 75 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=K 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 34 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+='"' 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 106 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=j 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 114 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=r 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 107 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=k 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 58 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=: 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 116 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=t 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 
03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 126 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+='~' 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 118 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=v 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 66 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=B 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 57 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=9 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 36 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+='$' 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 127 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=$'\177' 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 51 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=3 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 123 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+='{' 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 48 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=0 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 88 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+=X 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # printf %x 40 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:18.244 03:20:55 -- target/invalid.sh@25 -- # string+='(' 00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll++ )) 
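The loop traced above is target/invalid.sh manufacturing a deliberately bogus 41-character model number: each pass draws a byte value, renders it as hex with printf %x, decodes it with echo -e, and appends the character. A condensed standalone sketch of that generator pattern (bash; the helper name and exact byte range are illustrative, not lifted from the script):

    # Build a random printable string the way the trace does it:
    # decimal -> hex via printf, hex escape -> character via echo -e.
    gen_random_string() {
        local length=$1 string='' ll dec hex
        for (( ll = 0; ll < length; ll++ )); do
            dec=$(( RANDOM % 94 + 33 ))     # printable ASCII 33..126 (assumed range)
            hex=$(printf '%x' "$dec")       # e.g. 37 -> 25
            string+=$(echo -e "\x$hex")     # e.g. '\x25' -> '%'
        done
        echo "$string"
    }
    gen_random_string 41

The loop exit and the assembled string follow in the trace below.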
00:07:18.244 03:20:55 -- target/invalid.sh@24 -- # (( ll < length ))
00:07:18.244 03:20:55 -- target/invalid.sh@28 -- # [[ B == \- ]]
00:07:18.244 03:20:55 -- target/invalid.sh@31 -- # echo 'B8m@+fSQ%JO|w4V_YdBmRTeK"jrk:t~vB9$3{0X('
00:07:18.502 03:20:55 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'B8m@+fSQ%JO|w4V_YdBmRTeK"jrk:t~vB9$3{0X(' nqn.2016-06.io.spdk:cnode499
00:07:18.502 [2024-04-19 03:20:56.035053] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode499: invalid model number 'B8m@+fSQ%JO|w4V_YdBmRTeK"jrk:t~vB9$3{0X('
00:07:18.502 03:20:56 -- target/invalid.sh@58 -- # out='request:
00:07:18.502 {
00:07:18.502 "nqn": "nqn.2016-06.io.spdk:cnode499",
00:07:18.502 "model_number": "B8m@+fSQ%JO|w4V_YdBmRTeK\"jrk:t~vB9$\u007f3{0X(",
00:07:18.502 "method": "nvmf_create_subsystem",
00:07:18.502 "req_id": 1
00:07:18.502 }
00:07:18.502 Got JSON-RPC error response
00:07:18.502 response:
00:07:18.502 {
00:07:18.502 "code": -32602,
00:07:18.502 "message": "Invalid MN B8m@+fSQ%JO|w4V_YdBmRTeK\"jrk:t~vB9$\u007f3{0X("
00:07:18.502 }'
00:07:18.502 03:20:56 -- target/invalid.sh@59 -- # [[ request:
00:07:18.502 {
00:07:18.502 "nqn": "nqn.2016-06.io.spdk:cnode499",
00:07:18.502 "model_number": "B8m@+fSQ%JO|w4V_YdBmRTeK\"jrk:t~vB9$\u007f3{0X(",
00:07:18.502 "method": "nvmf_create_subsystem",
00:07:18.502 "req_id": 1
00:07:18.502 }
00:07:18.502 Got JSON-RPC error response
00:07:18.502 response:
00:07:18.502 {
00:07:18.502 "code": -32602,
00:07:18.502 "message": "Invalid MN B8m@+fSQ%JO|w4V_YdBmRTeK\"jrk:t~vB9$\u007f3{0X("
00:07:18.502 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:07:18.502 03:20:56 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:07:18.760 [2024-04-19 03:20:56.271923] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:18.760 03:20:56 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:07:19.018 03:20:56 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:07:19.018 03:20:56 -- target/invalid.sh@67 -- # echo ''
00:07:19.018 03:20:56 -- target/invalid.sh@67 -- # head -n 1
00:07:19.018 03:20:56 -- target/invalid.sh@67 -- # IP=
00:07:19.018 03:20:56 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:07:19.275 [2024-04-19 03:20:56.769523] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:07:19.275 03:20:56 -- target/invalid.sh@69 -- # out='request:
00:07:19.275 {
00:07:19.275 "nqn": "nqn.2016-06.io.spdk:cnode",
00:07:19.275 "listen_address": {
00:07:19.275 "trtype": "tcp",
00:07:19.275 "traddr": "",
00:07:19.275 "trsvcid": "4421"
00:07:19.275 },
00:07:19.275 "method": "nvmf_subsystem_remove_listener",
00:07:19.275 "req_id": 1
00:07:19.275 }
00:07:19.275 Got JSON-RPC error response
00:07:19.275 response:
00:07:19.275 {
00:07:19.275 "code": -32602,
00:07:19.275 "message": "Invalid parameters"
00:07:19.275 }'
00:07:19.275 03:20:56 -- target/invalid.sh@70 -- # [[ request:
00:07:19.275 {
00:07:19.275 "nqn": "nqn.2016-06.io.spdk:cnode",
00:07:19.275 "listen_address": {
00:07:19.275 "trtype": "tcp",
00:07:19.275 "traddr": "",
00:07:19.275 "trsvcid": "4421"
00:07:19.275 },
00:07:19.275 "method": "nvmf_subsystem_remove_listener",
00:07:19.275 "req_id": 1
00:07:19.275 }
00:07:19.275 Got JSON-RPC error response
00:07:19.275 response:
00:07:19.275 {
00:07:19.275 "code": -32602,
00:07:19.275 "message": "Invalid parameters"
00:07:19.275 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:07:19.275 03:20:56 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3878 -i 0
00:07:19.533 [2024-04-19 03:20:57.010277] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3878: invalid cntlid range [0-65519]
00:07:19.533 03:20:57 -- target/invalid.sh@73 -- # out='request:
00:07:19.533 {
00:07:19.533 "nqn": "nqn.2016-06.io.spdk:cnode3878",
00:07:19.533 "min_cntlid": 0,
00:07:19.533 "method": "nvmf_create_subsystem",
00:07:19.533 "req_id": 1
00:07:19.533 }
00:07:19.533 Got JSON-RPC error response
00:07:19.533 response:
00:07:19.533 {
00:07:19.533 "code": -32602,
00:07:19.533 "message": "Invalid cntlid range [0-65519]"
00:07:19.533 }'
00:07:19.533 03:20:57 -- target/invalid.sh@74 -- # [[ request:
00:07:19.533 {
00:07:19.533 "nqn": "nqn.2016-06.io.spdk:cnode3878",
00:07:19.533 "min_cntlid": 0,
00:07:19.533 "method": "nvmf_create_subsystem",
00:07:19.533 "req_id": 1
00:07:19.533 }
00:07:19.533 Got JSON-RPC error response
00:07:19.533 response:
00:07:19.533 {
00:07:19.533 "code": -32602,
00:07:19.533 "message": "Invalid cntlid range [0-65519]"
00:07:19.533 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:07:19.533 03:20:57 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21525 -i 65520
00:07:19.791 [2024-04-19 03:20:57.259099] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21525: invalid cntlid range [65520-65519]
00:07:19.791 03:20:57 -- target/invalid.sh@75 -- # out='request:
00:07:19.791 {
00:07:19.791 "nqn": "nqn.2016-06.io.spdk:cnode21525",
00:07:19.791 "min_cntlid": 65520,
00:07:19.791 "method": "nvmf_create_subsystem",
00:07:19.791 "req_id": 1
00:07:19.791 }
00:07:19.791 Got JSON-RPC error response
00:07:19.791 response:
00:07:19.791 {
00:07:19.791 "code": -32602,
00:07:19.791 "message": "Invalid cntlid range [65520-65519]"
00:07:19.791 }'
00:07:19.791 03:20:57 -- target/invalid.sh@76 -- # [[ request:
00:07:19.791 {
00:07:19.791 "nqn": "nqn.2016-06.io.spdk:cnode21525",
00:07:19.791 "min_cntlid": 65520,
00:07:19.791 "method": "nvmf_create_subsystem",
00:07:19.791 "req_id": 1
00:07:19.791 }
00:07:19.791 Got JSON-RPC error response
00:07:19.791 response:
00:07:19.791 {
00:07:19.791 "code": -32602,
00:07:19.791 "message": "Invalid cntlid range [65520-65519]"
00:07:19.791 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:07:19.791 03:20:57 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8546 -I 0
00:07:20.048 [2024-04-19 03:20:57.495920] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8546: invalid cntlid range [1-0]
00:07:20.048 03:20:57 -- target/invalid.sh@77 -- # out='request:
00:07:20.048 {
00:07:20.048 "nqn": "nqn.2016-06.io.spdk:cnode8546",
00:07:20.048 "max_cntlid": 0,
00:07:20.048 "method": "nvmf_create_subsystem",
00:07:20.048 "req_id": 1
00:07:20.048 }
00:07:20.048 Got JSON-RPC error response
00:07:20.048 response:
00:07:20.048 {
00:07:20.048 "code": -32602,
00:07:20.048 "message": "Invalid cntlid range [1-0]"
00:07:20.048 }'
00:07:20.048 03:20:57 -- target/invalid.sh@78 -- # [[ request:
00:07:20.048 {
00:07:20.048 "nqn": "nqn.2016-06.io.spdk:cnode8546",
00:07:20.048 "max_cntlid": 0,
00:07:20.048 "method": "nvmf_create_subsystem",
00:07:20.048 "req_id": 1
00:07:20.048 }
00:07:20.048 Got JSON-RPC error response
00:07:20.048 response:
00:07:20.048 {
00:07:20.048 "code": -32602,
00:07:20.048 "message": "Invalid cntlid range [1-0]"
00:07:20.048 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:07:20.048 03:20:57 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23618 -I 65520
00:07:20.306 [2024-04-19 03:20:57.740702] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23618: invalid cntlid range [1-65520]
00:07:20.306 03:20:57 -- target/invalid.sh@79 -- # out='request:
00:07:20.306 {
00:07:20.306 "nqn": "nqn.2016-06.io.spdk:cnode23618",
00:07:20.306 "max_cntlid": 65520,
00:07:20.306 "method": "nvmf_create_subsystem",
00:07:20.306 "req_id": 1
00:07:20.306 }
00:07:20.306 Got JSON-RPC error response
00:07:20.306 response:
00:07:20.306 {
00:07:20.306 "code": -32602,
00:07:20.306 "message": "Invalid cntlid range [1-65520]"
00:07:20.306 }'
00:07:20.306 03:20:57 -- target/invalid.sh@80 -- # [[ request:
00:07:20.306 {
00:07:20.306 "nqn": "nqn.2016-06.io.spdk:cnode23618",
00:07:20.306 "max_cntlid": 65520,
00:07:20.306 "method": "nvmf_create_subsystem",
00:07:20.306 "req_id": 1
00:07:20.306 }
00:07:20.306 Got JSON-RPC error response
00:07:20.306 response:
00:07:20.306 {
00:07:20.306 "code": -32602,
00:07:20.306 "message": "Invalid cntlid range [1-65520]"
00:07:20.306 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:07:20.306 03:20:57 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20498 -i 6 -I 5
00:07:20.564 [2024-04-19 03:20:57.985536] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20498: invalid cntlid range [6-5]
00:07:20.564 03:20:58 -- target/invalid.sh@83 -- # out='request:
00:07:20.564 {
00:07:20.564 "nqn": "nqn.2016-06.io.spdk:cnode20498",
00:07:20.564 "min_cntlid": 6,
00:07:20.564 "max_cntlid": 5,
00:07:20.564 "method": "nvmf_create_subsystem",
00:07:20.564 "req_id": 1
00:07:20.564 }
00:07:20.564 Got JSON-RPC error response
00:07:20.564 response:
00:07:20.564 {
00:07:20.564 "code": -32602,
00:07:20.564 "message": "Invalid cntlid range [6-5]"
00:07:20.564 }'
00:07:20.564 03:20:58 -- target/invalid.sh@84 -- # [[ request:
00:07:20.564 {
00:07:20.564 "nqn": "nqn.2016-06.io.spdk:cnode20498",
00:07:20.564 "min_cntlid": 6,
00:07:20.564 "max_cntlid": 5,
00:07:20.564 "method": "nvmf_create_subsystem",
00:07:20.564 "req_id": 1
00:07:20.564 }
00:07:20.564 Got JSON-RPC error response
00:07:20.564 response:
00:07:20.564 {
00:07:20.564 "code": -32602,
00:07:20.564 "message": "Invalid cntlid range [6-5]"
00:07:20.564 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:07:20.564 03:20:58 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:07:20.564 03:20:58 -- target/invalid.sh@87 -- # out='request:
00:07:20.564 {
00:07:20.564 "name": "foobar",
00:07:20.564 "method": "nvmf_delete_target",
00:07:20.564 "req_id": 1
00:07:20.564 }
00:07:20.564 Got JSON-RPC error response
00:07:20.564 response:
00:07:20.564 {
00:07:20.564 "code": -32602,
00:07:20.564 "message": "The specified target doesn'\''t exist, cannot delete it."
00:07:20.564 }'
00:07:20.564 03:20:58 -- target/invalid.sh@88 -- # [[ request:
00:07:20.564 {
00:07:20.564 "name": "foobar",
00:07:20.564 "method": "nvmf_delete_target",
00:07:20.564 "req_id": 1
00:07:20.564 }
00:07:20.564 Got JSON-RPC error response
00:07:20.564 response:
00:07:20.564 {
00:07:20.564 "code": -32602,
00:07:20.564 "message": "The specified target doesn't exist, cannot delete it."
00:07:20.564 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:07:20.564 03:20:58 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:07:20.564 03:20:58 -- target/invalid.sh@91 -- # nvmftestfini
00:07:20.564 03:20:58 -- nvmf/common.sh@477 -- # nvmfcleanup
00:07:20.564 03:20:58 -- nvmf/common.sh@117 -- # sync
00:07:20.822 03:20:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:20.822 03:20:58 -- nvmf/common.sh@120 -- # set +e
00:07:20.822 03:20:58 -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:20.822 03:20:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:20.822 rmmod nvme_tcp
00:07:20.822 rmmod nvme_fabrics
00:07:20.822 rmmod nvme_keyring
00:07:20.822 03:20:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:20.822 03:20:58 -- nvmf/common.sh@124 -- # set -e
00:07:20.822 03:20:58 -- nvmf/common.sh@125 -- # return 0
00:07:20.822 03:20:58 -- nvmf/common.sh@478 -- # '[' -n 172723 ']'
00:07:20.822 03:20:58 -- nvmf/common.sh@479 -- # killprocess 172723
00:07:20.822 03:20:58 -- common/autotest_common.sh@936 -- # '[' -z 172723 ']'
00:07:20.822 03:20:58 -- common/autotest_common.sh@940 -- # kill -0 172723
00:07:20.822 03:20:58 -- common/autotest_common.sh@941 -- # uname
00:07:20.822 03:20:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:20.822 03:20:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 172723
00:07:20.822 03:20:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:20.823 03:20:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:20.823 03:20:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 172723'
00:07:20.823 killing process with pid 172723
00:07:20.823 03:20:58 -- common/autotest_common.sh@955 -- # kill 172723
00:07:20.823 03:20:58 -- common/autotest_common.sh@960 -- # wait 172723
00:07:21.082 03:20:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:07:21.082 03:20:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:07:21.082 03:20:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:07:21.082 03:20:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:21.082 03:20:58 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:21.082 03:20:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:21.082 03:20:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:21.082 03:20:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:22.985 03:21:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:22.985
00:07:22.985 real 0m8.745s
00:07:22.985 user 0m19.978s
00:07:22.985 sys 0m2.539s
00:07:22.985 03:21:00 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:22.985 03:21:00 -- common/autotest_common.sh@10 -- # set +x
00:07:22.985 ************************************
00:07:22.985 END TEST nvmf_invalid
00:07:22.985 ************************************
00:07:23.243 03:21:00 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:23.243 03:21:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:23.243 03:21:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.243 03:21:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.243 ************************************ 00:07:23.243 START TEST nvmf_abort 00:07:23.243 ************************************ 00:07:23.243 03:21:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:23.243 * Looking for test storage... 00:07:23.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.243 03:21:00 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.243 03:21:00 -- nvmf/common.sh@7 -- # uname -s 00:07:23.243 03:21:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.243 03:21:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.243 03:21:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.244 03:21:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.244 03:21:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.244 03:21:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.244 03:21:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.244 03:21:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.244 03:21:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.244 03:21:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.244 03:21:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.244 03:21:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.244 03:21:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.244 03:21:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.244 03:21:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.244 03:21:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.244 03:21:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.244 03:21:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.244 03:21:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.244 03:21:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.244 03:21:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.244 03:21:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.244 03:21:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.244 03:21:00 -- paths/export.sh@5 -- # export PATH 00:07:23.244 03:21:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.244 03:21:00 -- nvmf/common.sh@47 -- # : 0 00:07:23.244 03:21:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.244 03:21:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.244 03:21:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.244 03:21:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.244 03:21:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.244 03:21:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.244 03:21:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.244 03:21:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.244 03:21:00 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:23.244 03:21:00 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:23.244 03:21:00 -- target/abort.sh@14 -- # nvmftestinit 00:07:23.244 03:21:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:23.244 03:21:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.244 03:21:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:23.244 03:21:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:23.244 03:21:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:23.244 03:21:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.244 03:21:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.244 03:21:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.244 03:21:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:23.244 03:21:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:23.244 03:21:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:23.244 03:21:00 -- common/autotest_common.sh@10 -- # set +x 00:07:25.781 03:21:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
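The device discovery traced next works purely from sysfs: for each whitelisted PCI ID, the script expands /sys/bus/pci/devices/<bdf>/net/* to find the kernel interfaces bound to that function. Roughly equivalent, assuming the standard sysfs layout (BDFs taken from this log):

    # Map PCI functions to their net devices via sysfs.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
        done
    done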
00:07:25.781 03:21:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:25.781 03:21:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:25.781 03:21:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:25.781 03:21:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:25.781 03:21:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:25.781 03:21:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:25.781 03:21:02 -- nvmf/common.sh@295 -- # net_devs=() 00:07:25.781 03:21:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:25.781 03:21:02 -- nvmf/common.sh@296 -- # e810=() 00:07:25.781 03:21:02 -- nvmf/common.sh@296 -- # local -ga e810 00:07:25.781 03:21:02 -- nvmf/common.sh@297 -- # x722=() 00:07:25.781 03:21:02 -- nvmf/common.sh@297 -- # local -ga x722 00:07:25.781 03:21:02 -- nvmf/common.sh@298 -- # mlx=() 00:07:25.781 03:21:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:25.781 03:21:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.781 03:21:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.781 03:21:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.781 03:21:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.781 03:21:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.781 03:21:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.781 03:21:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.781 03:21:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.781 03:21:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.781 03:21:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.781 03:21:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.781 03:21:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:25.781 03:21:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:25.781 03:21:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:25.781 03:21:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.781 03:21:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:25.781 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:25.781 03:21:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.781 03:21:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:25.781 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:25.781 03:21:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:07:25.781 03:21:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.781 03:21:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.781 03:21:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:25.781 03:21:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.781 03:21:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:25.781 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:25.781 03:21:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.781 03:21:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.781 03:21:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.781 03:21:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:25.781 03:21:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.781 03:21:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:25.781 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:25.781 03:21:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.781 03:21:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:25.781 03:21:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:25.781 03:21:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:25.781 03:21:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:25.781 03:21:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.781 03:21:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.781 03:21:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.781 03:21:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:25.781 03:21:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.781 03:21:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.781 03:21:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:25.781 03:21:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.781 03:21:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.781 03:21:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:25.781 03:21:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:25.781 03:21:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.781 03:21:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.781 03:21:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.781 03:21:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.781 03:21:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:25.781 03:21:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.781 03:21:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.781 03:21:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.781 03:21:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:25.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:25.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms
00:07:25.781
00:07:25.781 --- 10.0.0.2 ping statistics ---
00:07:25.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:25.781 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms
00:07:25.781 03:21:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:25.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:25.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms
00:07:25.781
00:07:25.781 --- 10.0.0.1 ping statistics ---
00:07:25.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:25.781 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms
00:07:25.781 03:21:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:25.781 03:21:02 -- nvmf/common.sh@411 -- # return 0
00:07:25.781 03:21:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:07:25.781 03:21:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:25.781 03:21:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:07:25.781 03:21:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:07:25.781 03:21:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:25.781 03:21:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:07:25.781 03:21:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:07:25.781 03:21:02 -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:07:25.781 03:21:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:07:25.781 03:21:02 -- common/autotest_common.sh@710 -- # xtrace_disable
00:07:25.781 03:21:02 -- common/autotest_common.sh@10 -- # set +x
00:07:25.781 03:21:02 -- nvmf/common.sh@470 -- # nvmfpid=175475
00:07:25.781 03:21:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:07:25.781 03:21:02 -- nvmf/common.sh@471 -- # waitforlisten 175475
00:07:25.781 03:21:02 -- common/autotest_common.sh@817 -- # '[' -z 175475 ']'
00:07:25.781 03:21:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:25.781 03:21:02 -- common/autotest_common.sh@822 -- # local max_retries=100
00:07:25.781 03:21:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:25.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:25.781 03:21:02 -- common/autotest_common.sh@826 -- # xtrace_disable
00:07:25.781 03:21:02 -- common/autotest_common.sh@10 -- # set +x
00:07:25.781 [2024-04-19 03:21:02.934882] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization...
00:07:25.781 [2024-04-19 03:21:02.934973] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:25.781 EAL: No free 2048 kB hugepages reported on node 1
00:07:25.781 [2024-04-19 03:21:02.999485] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:25.781 [2024-04-19 03:21:03.106700] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:25.781 [2024-04-19 03:21:03.106756] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
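nvmf_tcp_init, traced above, moves one port of the dual-port NIC into a private network namespace so that target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1) talk over a real link; the two pings gate everything that follows. Stripped of the surrounding xtrace, the setup amounts to:

    # Namespace split performed by nvmf_tcp_init (commands from this log).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator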
00:07:25.781 [2024-04-19 03:21:03.106784] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.781 [2024-04-19 03:21:03.106795] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.781 [2024-04-19 03:21:03.106805] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.782 [2024-04-19 03:21:03.106893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.782 [2024-04-19 03:21:03.106922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.782 [2024-04-19 03:21:03.106924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.782 03:21:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:25.782 03:21:03 -- common/autotest_common.sh@850 -- # return 0 00:07:25.782 03:21:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:25.782 03:21:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:25.782 03:21:03 -- common/autotest_common.sh@10 -- # set +x 00:07:25.782 03:21:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.782 03:21:03 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:25.782 03:21:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.782 03:21:03 -- common/autotest_common.sh@10 -- # set +x 00:07:25.782 [2024-04-19 03:21:03.245490] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.782 03:21:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.782 03:21:03 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:25.782 03:21:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.782 03:21:03 -- common/autotest_common.sh@10 -- # set +x 00:07:25.782 Malloc0 00:07:25.782 03:21:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.782 03:21:03 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:25.782 03:21:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.782 03:21:03 -- common/autotest_common.sh@10 -- # set +x 00:07:25.782 Delay0 00:07:25.782 03:21:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.782 03:21:03 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:25.782 03:21:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.782 03:21:03 -- common/autotest_common.sh@10 -- # set +x 00:07:25.782 03:21:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.782 03:21:03 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:25.782 03:21:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.782 03:21:03 -- common/autotest_common.sh@10 -- # set +x 00:07:25.782 03:21:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.782 03:21:03 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:25.782 03:21:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.782 03:21:03 -- common/autotest_common.sh@10 -- # set +x 00:07:25.782 [2024-04-19 03:21:03.319163] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.782 03:21:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.782 03:21:03 -- target/abort.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:25.782 03:21:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:25.782 03:21:03 -- common/autotest_common.sh@10 -- # set +x
00:07:25.782 03:21:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:25.782 03:21:03 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:07:26.039 EAL: No free 2048 kB hugepages reported on node 1
00:07:26.039 [2024-04-19 03:21:03.425874] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:07:28.580 Initializing NVMe Controllers
00:07:28.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:07:28.580 controller IO queue size 128 less than required
00:07:28.580 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:07:28.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:07:28.580 Initialization complete. Launching workers.
00:07:28.580 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33330
00:07:28.580 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33391, failed to submit 62
00:07:28.580 success 33334, unsuccess 57, failed 0
00:07:28.580 03:21:05 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:28.580 03:21:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:28.580 03:21:05 -- common/autotest_common.sh@10 -- # set +x
00:07:28.580 03:21:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:28.580 03:21:05 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:07:28.580 03:21:05 -- target/abort.sh@38 -- # nvmftestfini
00:07:28.580 03:21:05 -- nvmf/common.sh@477 -- # nvmfcleanup
00:07:28.580 03:21:05 -- nvmf/common.sh@117 -- # sync
00:07:28.581 03:21:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:28.581 03:21:05 -- nvmf/common.sh@120 -- # set +e
00:07:28.581 03:21:05 -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:28.581 03:21:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:28.581 rmmod nvme_tcp
00:07:28.581 rmmod nvme_fabrics
00:07:28.581 rmmod nvme_keyring
00:07:28.581 03:21:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:28.581 03:21:05 -- nvmf/common.sh@124 -- # set -e
00:07:28.581 03:21:05 -- nvmf/common.sh@125 -- # return 0
00:07:28.581 03:21:05 -- nvmf/common.sh@478 -- # '[' -n 175475 ']'
00:07:28.581 03:21:05 -- nvmf/common.sh@479 -- # killprocess 175475
00:07:28.581 03:21:05 -- common/autotest_common.sh@936 -- # '[' -z 175475 ']'
00:07:28.581 03:21:05 -- common/autotest_common.sh@940 -- # kill -0 175475
00:07:28.581 03:21:05 -- common/autotest_common.sh@941 -- # uname
00:07:28.581 03:21:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:28.581 03:21:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 175475
00:07:28.581 03:21:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:07:28.581 03:21:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:07:28.581 03:21:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 175475'
00:07:28.581 killing process with pid 175475
00:07:28.581 03:21:05 -- common/autotest_common.sh@955 -- # kill 175475
00:07:28.581 03:21:05 -- common/autotest_common.sh@960 -- # wait 175475
00:07:28.581 03:21:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:07:28.581 03:21:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:07:28.581 03:21:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:07:28.581 03:21:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:28.581 03:21:05 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:28.581 03:21:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:28.581 03:21:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:28.581 03:21:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:30.487 03:21:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:30.487
00:07:30.487 real 0m7.337s
00:07:30.487 user 0m10.526s
00:07:30.487 sys 0m2.544s
00:07:30.487 03:21:08 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:30.487 03:21:08 -- common/autotest_common.sh@10 -- # set +x
00:07:30.487 ************************************
00:07:30.487 END TEST nvmf_abort
00:07:30.487 ************************************
00:07:30.487 03:21:08 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:07:30.487 03:21:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:07:30.487 03:21:08 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:30.487 03:21:08 -- common/autotest_common.sh@10 -- # set +x
00:07:30.745 ************************************
00:07:30.745 START TEST nvmf_ns_hotplug_stress
00:07:30.745 ************************************
00:07:30.745 03:21:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:07:30.745 * Looking for test storage...
00:07:30.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.745 03:21:08 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.745 03:21:08 -- nvmf/common.sh@7 -- # uname -s 00:07:30.745 03:21:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.745 03:21:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.745 03:21:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.745 03:21:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.745 03:21:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.745 03:21:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.745 03:21:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.745 03:21:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.745 03:21:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.745 03:21:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.745 03:21:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.745 03:21:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.745 03:21:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.745 03:21:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.745 03:21:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.745 03:21:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.745 03:21:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.745 03:21:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.745 03:21:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.745 03:21:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.746 03:21:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.746 03:21:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.746 03:21:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.746 03:21:08 -- paths/export.sh@5 -- # export PATH 00:07:30.746 03:21:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.746 03:21:08 -- nvmf/common.sh@47 -- # : 0 00:07:30.746 03:21:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.746 03:21:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.746 03:21:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.746 03:21:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.746 03:21:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.746 03:21:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.746 03:21:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.746 03:21:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.746 03:21:08 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.746 03:21:08 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:07:30.746 03:21:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:30.746 03:21:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.746 03:21:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:30.746 03:21:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:30.746 03:21:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:30.746 03:21:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.746 03:21:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.746 03:21:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.746 03:21:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:30.746 03:21:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:30.746 03:21:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.746 03:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:32.648 03:21:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:32.648 03:21:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:32.648 03:21:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:32.648 03:21:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:32.648 03:21:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:32.648 03:21:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:32.648 03:21:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:32.648 03:21:10 -- nvmf/common.sh@295 -- # net_devs=() 00:07:32.648 03:21:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:32.648 03:21:10 -- nvmf/common.sh@296 
-- # e810=() 00:07:32.648 03:21:10 -- nvmf/common.sh@296 -- # local -ga e810 00:07:32.648 03:21:10 -- nvmf/common.sh@297 -- # x722=() 00:07:32.648 03:21:10 -- nvmf/common.sh@297 -- # local -ga x722 00:07:32.648 03:21:10 -- nvmf/common.sh@298 -- # mlx=() 00:07:32.648 03:21:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:32.648 03:21:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.648 03:21:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.648 03:21:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.648 03:21:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.648 03:21:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.648 03:21:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.648 03:21:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.648 03:21:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.648 03:21:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.648 03:21:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.648 03:21:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.648 03:21:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:32.648 03:21:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:32.648 03:21:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:32.648 03:21:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.648 03:21:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:32.648 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:32.648 03:21:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.648 03:21:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:32.648 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:32.648 03:21:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:32.648 03:21:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:32.648 03:21:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.648 03:21:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.648 03:21:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:32.649 03:21:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.649 03:21:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:32.649 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:07:32.649 03:21:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.649 03:21:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.649 03:21:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.649 03:21:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:32.649 03:21:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.649 03:21:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:32.649 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:32.649 03:21:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.649 03:21:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:32.649 03:21:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:32.649 03:21:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:32.649 03:21:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:32.649 03:21:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:32.649 03:21:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.649 03:21:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.649 03:21:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.649 03:21:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:32.649 03:21:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.649 03:21:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.649 03:21:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:32.649 03:21:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.649 03:21:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.649 03:21:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:32.649 03:21:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:32.649 03:21:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.649 03:21:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.649 03:21:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.649 03:21:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.649 03:21:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:32.907 03:21:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.907 03:21:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.907 03:21:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.907 03:21:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:32.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:07:32.908 00:07:32.908 --- 10.0.0.2 ping statistics --- 00:07:32.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.908 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:07:32.908 03:21:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:32.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:07:32.908 00:07:32.908 --- 10.0.0.1 ping statistics --- 00:07:32.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.908 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:07:32.908 03:21:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.908 03:21:10 -- nvmf/common.sh@411 -- # return 0 00:07:32.908 03:21:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:32.908 03:21:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.908 03:21:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:32.908 03:21:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:32.908 03:21:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.908 03:21:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:32.908 03:21:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:32.908 03:21:10 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:07:32.908 03:21:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:32.908 03:21:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:32.908 03:21:10 -- common/autotest_common.sh@10 -- # set +x 00:07:32.908 03:21:10 -- nvmf/common.sh@470 -- # nvmfpid=178215 00:07:32.908 03:21:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:32.908 03:21:10 -- nvmf/common.sh@471 -- # waitforlisten 178215 00:07:32.908 03:21:10 -- common/autotest_common.sh@817 -- # '[' -z 178215 ']' 00:07:32.908 03:21:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.908 03:21:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:32.908 03:21:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.908 03:21:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:32.908 03:21:10 -- common/autotest_common.sh@10 -- # set +x 00:07:32.908 [2024-04-19 03:21:10.329862] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:07:32.908 [2024-04-19 03:21:10.329950] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.908 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.908 [2024-04-19 03:21:10.395954] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.167 [2024-04-19 03:21:10.504583] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.167 [2024-04-19 03:21:10.504640] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.167 [2024-04-19 03:21:10.504669] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.167 [2024-04-19 03:21:10.504680] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.167 [2024-04-19 03:21:10.504690] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
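For reference, the namespace plumbing that nvmf_tcp_init performed above reduces to the sketch below. Every command is taken from the traced nvmf/common.sh lines; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this CI host, and the whole sequence assumes root privileges.

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                     # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps the second port
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target, as in the log
    ip netns exec "$NS" ping -c 1 10.0.0.1              # target -> initiator

Splitting the two E810 ports across a namespace like this forces target and initiator traffic over the physical link rather than the kernel loopback path, which is the point of testing against real NICs.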
00:07:33.167 [2024-04-19 03:21:10.504826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.167 [2024-04-19 03:21:10.504855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.167 [2024-04-19 03:21:10.504858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.167 03:21:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:33.167 03:21:10 -- common/autotest_common.sh@850 -- # return 0 00:07:33.167 03:21:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:33.167 03:21:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:33.167 03:21:10 -- common/autotest_common.sh@10 -- # set +x 00:07:33.167 03:21:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.167 03:21:10 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:07:33.167 03:21:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:33.425 [2024-04-19 03:21:10.870643] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.425 03:21:10 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:33.688 03:21:11 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.982 [2024-04-19 03:21:11.401598] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.982 03:21:11 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.241 03:21:11 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:34.499 Malloc0 00:07:34.499 03:21:11 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:34.757 Delay0 00:07:34.757 03:21:12 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.014 03:21:12 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:35.272 NULL1 00:07:35.272 03:21:12 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:35.529 03:21:12 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=178622 00:07:35.529 03:21:12 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:35.529 03:21:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:35.529 03:21:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.529 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.901 Read completed with error (sct=0, sc=11) 00:07:36.901 
03:21:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.901 03:21:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:07:36.901 03:21:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:37.159 true 00:07:37.159 03:21:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:37.159 03:21:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.090 03:21:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.090 03:21:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:07:38.090 03:21:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:38.347 true 00:07:38.347 03:21:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:38.347 03:21:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.604 03:21:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.861 03:21:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:07:38.861 03:21:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:39.117 true 00:07:39.117 03:21:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:39.117 03:21:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.048 03:21:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.305 03:21:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:07:40.305 03:21:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:40.562 true 00:07:40.562 03:21:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:40.562 03:21:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.820 03:21:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.078 03:21:18 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:07:41.078 03:21:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:41.078 true 00:07:41.336 03:21:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:41.336 03:21:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.268 03:21:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.526 03:21:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:07:42.526 03:21:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:42.784 true 00:07:42.784 03:21:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:42.784 03:21:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.042 03:21:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.300 03:21:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:07:43.300 03:21:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:43.300 true 00:07:43.557 03:21:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:43.557 03:21:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.489 03:21:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.489 03:21:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:07:44.489 03:21:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:44.747 true 00:07:44.747 03:21:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:44.747 03:21:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.005 03:21:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.263 03:21:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:07:45.263 03:21:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:45.521 true 00:07:45.521 03:21:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:45.521 03:21:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:46.452 03:21:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.710 03:21:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:07:46.710 03:21:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:46.967 true 00:07:46.967 03:21:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:46.967 03:21:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.224 03:21:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.482 03:21:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:07:47.482 03:21:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:47.739 true 00:07:47.739 03:21:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:47.739 03:21:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.671 03:21:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.671 03:21:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:07:48.672 03:21:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:48.929 true 00:07:48.929 03:21:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:48.929 03:21:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.187 03:21:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.445 03:21:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:07:49.445 03:21:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:49.703 true 00:07:49.703 03:21:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:49.703 03:21:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.993 03:21:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.251 03:21:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:07:50.251 03:21:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:50.509 true 00:07:50.509 03:21:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:50.509 03:21:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
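The hot-plug loop generating the repeating add_ns/resize/remove_ns records above can be summarized as follows. This is a hedged reconstruction from the traced ns_hotplug_stress.sh line numbers (33-41), not a copy of the script; PERF_PID is the spdk_nvme_perf instance started earlier (178622 here), and the bare "true" after each resize in the log is the RPC's JSON return value.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do          # loop until the 30 s perf run exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"        # grow NULL1 while I/O is in flight
    done

Each pass races namespace detachment, reattachment, and a resize against the random-read workload; the "Read completed with error (sct=0, sc=11)" messages are consistent with reads landing on a namespace that is momentarily gone, which is exactly the condition this stress test is meant to exercise.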
00:07:51.881 03:21:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.881 03:21:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:07:51.881 03:21:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:52.138 true 00:07:52.138 03:21:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:52.138 03:21:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.396 03:21:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.654 03:21:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:07:52.654 03:21:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:52.911 true 00:07:52.911 03:21:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:52.911 03:21:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.844 03:21:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.101 03:21:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:07:54.101 03:21:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:54.359 true 00:07:54.359 03:21:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:54.359 03:21:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.617 03:21:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.874 03:21:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:07:54.874 03:21:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:55.131 true 00:07:55.131 03:21:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:55.131 03:21:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.062 03:21:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.319 03:21:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:07:56.319 03:21:33 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:56.577 true 00:07:56.577 03:21:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:56.577 03:21:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.835 03:21:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.092 03:21:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:07:57.092 03:21:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:57.350 true 00:07:57.350 03:21:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:57.350 03:21:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.283 03:21:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.283 03:21:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:07:58.283 03:21:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:58.540 true 00:07:58.540 03:21:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:58.540 03:21:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.798 03:21:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.055 03:21:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:07:59.055 03:21:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:59.312 true 00:07:59.312 03:21:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:07:59.312 03:21:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.243 03:21:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.501 03:21:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:08:00.501 03:21:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:00.759 true 00:08:00.759 03:21:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:08:00.759 03:21:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.017 03:21:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.275 03:21:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:08:01.275 03:21:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:01.275 true 00:08:01.533 03:21:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:08:01.533 03:21:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.465 03:21:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.465 03:21:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:08:02.465 03:21:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:02.723 true 00:08:02.980 03:21:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:08:02.980 03:21:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.238 03:21:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.238 03:21:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:08:03.238 03:21:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:03.495 true 00:08:03.495 03:21:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:08:03.495 03:21:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.752 03:21:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.010 03:21:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:08:04.010 03:21:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:04.267 true 00:08:04.267 03:21:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:08:04.267 03:21:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.636 03:21:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.636 03:21:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:08:05.636 03:21:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:05.892 true 00:08:05.892 03:21:43 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:08:05.892 03:21:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.855 03:21:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.855 Initializing NVMe Controllers 00:08:06.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:06.855 Controller IO queue size 128, less than required. 00:08:06.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:06.855 Controller IO queue size 128, less than required. 00:08:06.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:06.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:06.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:06.855 Initialization complete. Launching workers. 00:08:06.855 ======================================================== 00:08:06.855 Latency(us) 00:08:06.855 Device Information : IOPS MiB/s Average min max 00:08:06.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 977.63 0.48 73649.41 2467.87 1036077.82 00:08:06.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10904.31 5.32 11738.89 3343.33 450982.30 00:08:06.855 ======================================================== 00:08:06.855 Total : 11881.94 5.80 16832.81 2467.87 1036077.82 00:08:06.855 00:08:06.855 03:21:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:08:06.855 03:21:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:07.114 true 00:08:07.114 03:21:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 178622 00:08:07.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (178622) - No such process 00:08:07.114 03:21:44 -- target/ns_hotplug_stress.sh@44 -- # wait 178622 00:08:07.114 03:21:44 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:08:07.114 03:21:44 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:08:07.114 03:21:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:07.114 03:21:44 -- nvmf/common.sh@117 -- # sync 00:08:07.114 03:21:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:07.114 03:21:44 -- nvmf/common.sh@120 -- # set +e 00:08:07.114 03:21:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.114 03:21:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:07.114 rmmod nvme_tcp 00:08:07.114 rmmod nvme_fabrics 00:08:07.114 rmmod nvme_keyring 00:08:07.114 03:21:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.114 03:21:44 -- nvmf/common.sh@124 -- # set -e 00:08:07.114 03:21:44 -- nvmf/common.sh@125 -- # return 0 00:08:07.114 03:21:44 -- nvmf/common.sh@478 -- # '[' -n 178215 ']' 00:08:07.114 03:21:44 -- nvmf/common.sh@479 -- # killprocess 178215 00:08:07.114 03:21:44 -- common/autotest_common.sh@936 -- # '[' -z 178215 ']' 00:08:07.114 03:21:44 -- common/autotest_common.sh@940 -- # kill -0 178215 00:08:07.114 03:21:44 -- common/autotest_common.sh@941 -- # uname 00:08:07.114 03:21:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:07.114 03:21:44 
-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 178215 00:08:07.114 03:21:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:07.114 03:21:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:07.114 03:21:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 178215' 00:08:07.114 killing process with pid 178215 00:08:07.114 03:21:44 -- common/autotest_common.sh@955 -- # kill 178215 00:08:07.114 03:21:44 -- common/autotest_common.sh@960 -- # wait 178215 00:08:07.371 03:21:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:07.371 03:21:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:07.371 03:21:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:07.371 03:21:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.371 03:21:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:07.371 03:21:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.371 03:21:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.371 03:21:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.907 03:21:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:09.907 00:08:09.907 real 0m38.840s 00:08:09.907 user 2m30.925s 00:08:09.907 sys 0m10.020s 00:08:09.907 03:21:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:09.907 03:21:46 -- common/autotest_common.sh@10 -- # set +x 00:08:09.907 ************************************ 00:08:09.907 END TEST nvmf_ns_hotplug_stress 00:08:09.907 ************************************ 00:08:09.907 03:21:46 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:09.907 03:21:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:09.907 03:21:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.907 03:21:46 -- common/autotest_common.sh@10 -- # set +x 00:08:09.907 ************************************ 00:08:09.907 START TEST nvmf_connect_stress 00:08:09.907 ************************************ 00:08:09.907 03:21:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:09.907 * Looking for test storage... 
00:08:09.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.907 03:21:47 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.907 03:21:47 -- nvmf/common.sh@7 -- # uname -s 00:08:09.907 03:21:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.907 03:21:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.907 03:21:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.907 03:21:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.907 03:21:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.907 03:21:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.907 03:21:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.907 03:21:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.907 03:21:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.907 03:21:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.907 03:21:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:09.907 03:21:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:09.907 03:21:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.907 03:21:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.907 03:21:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.907 03:21:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.907 03:21:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.907 03:21:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.907 03:21:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.907 03:21:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.907 03:21:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.907 03:21:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.907 03:21:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.907 03:21:47 -- paths/export.sh@5 -- # export PATH 00:08:09.907 03:21:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.907 03:21:47 -- nvmf/common.sh@47 -- # : 0 00:08:09.907 03:21:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.907 03:21:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.907 03:21:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.907 03:21:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.907 03:21:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.907 03:21:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.907 03:21:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.907 03:21:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.907 03:21:47 -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:09.908 03:21:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:09.908 03:21:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.908 03:21:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:09.908 03:21:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:09.908 03:21:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:09.908 03:21:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.908 03:21:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.908 03:21:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.908 03:21:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:09.908 03:21:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:09.908 03:21:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:09.908 03:21:47 -- common/autotest_common.sh@10 -- # set +x 00:08:11.811 03:21:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:11.811 03:21:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:11.811 03:21:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:11.811 03:21:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:11.811 03:21:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:11.811 03:21:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:11.811 03:21:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:11.811 03:21:49 -- nvmf/common.sh@295 -- # net_devs=() 00:08:11.811 03:21:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:11.811 03:21:49 -- nvmf/common.sh@296 -- # e810=() 00:08:11.811 03:21:49 -- nvmf/common.sh@296 -- # local -ga e810 00:08:11.811 03:21:49 -- nvmf/common.sh@297 -- # x722=() 
00:08:11.811 03:21:49 -- nvmf/common.sh@297 -- # local -ga x722 00:08:11.811 03:21:49 -- nvmf/common.sh@298 -- # mlx=() 00:08:11.811 03:21:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:11.811 03:21:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.811 03:21:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.811 03:21:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.811 03:21:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.811 03:21:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.811 03:21:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.811 03:21:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.811 03:21:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.811 03:21:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.811 03:21:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.811 03:21:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.811 03:21:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:11.811 03:21:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:11.811 03:21:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:11.811 03:21:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.811 03:21:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:11.811 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:11.811 03:21:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.811 03:21:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:11.811 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:11.811 03:21:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:11.811 03:21:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.811 03:21:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.811 03:21:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:11.811 03:21:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.811 03:21:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:11.811 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:11.811 03:21:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
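The per-device discovery step just completed for 0000:0a:00.0 relies on a sysfs convention worth noting: for a PCI function, the kernel exposes any bound network interfaces as directory names under /sys/bus/pci/devices/<bdf>/net/. A standalone sketch extracted from the traced nvmf/common.sh lines (the BDF is the example from this host):

    bdf=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$bdf/net/"*)    # expands to e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep the names
    echo "Found net devices under $bdf: ${pci_net_devs[*]}"

If no interface is bound, the glob stays unexpanded; the script's (( ... == 0 )) count check above is what guards against that case.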
00:08:11.811 03:21:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.811 03:21:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.811 03:21:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:11.811 03:21:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.811 03:21:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:11.811 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:11.811 03:21:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.811 03:21:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:11.811 03:21:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:11.811 03:21:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:11.811 03:21:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.811 03:21:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.811 03:21:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.811 03:21:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:11.811 03:21:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.811 03:21:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.811 03:21:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:11.811 03:21:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.811 03:21:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.811 03:21:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:11.811 03:21:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:11.811 03:21:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.811 03:21:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.811 03:21:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.811 03:21:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.811 03:21:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:11.811 03:21:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.811 03:21:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.811 03:21:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.811 03:21:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:11.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:08:11.811 00:08:11.811 --- 10.0.0.2 ping statistics --- 00:08:11.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.811 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:08:11.811 03:21:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:11.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:08:11.811 00:08:11.811 --- 10.0.0.1 ping statistics --- 00:08:11.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.811 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:08:11.811 03:21:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.811 03:21:49 -- nvmf/common.sh@411 -- # return 0 00:08:11.811 03:21:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:11.811 03:21:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.811 03:21:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:11.811 03:21:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.811 03:21:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:11.811 03:21:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:11.811 03:21:49 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:11.811 03:21:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:11.811 03:21:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:11.811 03:21:49 -- common/autotest_common.sh@10 -- # set +x 00:08:11.811 03:21:49 -- nvmf/common.sh@470 -- # nvmfpid=184363 00:08:11.812 03:21:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:11.812 03:21:49 -- nvmf/common.sh@471 -- # waitforlisten 184363 00:08:11.812 03:21:49 -- common/autotest_common.sh@817 -- # '[' -z 184363 ']' 00:08:11.812 03:21:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.812 03:21:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:11.812 03:21:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.812 03:21:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:11.812 03:21:49 -- common/autotest_common.sh@10 -- # set +x 00:08:11.812 [2024-04-19 03:21:49.208061] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:08:11.812 [2024-04-19 03:21:49.208130] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.812 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.812 [2024-04-19 03:21:49.270866] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.070 [2024-04-19 03:21:49.377660] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.070 [2024-04-19 03:21:49.377711] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.070 [2024-04-19 03:21:49.377725] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.070 [2024-04-19 03:21:49.377743] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.070 [2024-04-19 03:21:49.377767] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
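Starting the target for connect_stress repeats the pattern used above: launch nvmf_tgt inside the namespace, record its PID, and block until the RPC socket answers. A simplified sketch of what nvmfappstart/waitforlisten amount to; the real waitforlisten in autotest_common.sh is more elaborate, and polling rpc_get_methods here is an assumption that mirrors its behavior rather than a copy of it.

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &                         # -m 0xE: reactors on cores 1-3, as logged
    nvmfpid=$!
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do                     # max_retries=100, as in the trace
        "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done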
00:08:12.070 [2024-04-19 03:21:49.377862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.070 [2024-04-19 03:21:49.377925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.070 [2024-04-19 03:21:49.377928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.070 03:21:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:12.070 03:21:49 -- common/autotest_common.sh@850 -- # return 0 00:08:12.070 03:21:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:12.070 03:21:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:12.070 03:21:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.070 03:21:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.070 03:21:49 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.070 03:21:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:12.070 03:21:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.070 [2024-04-19 03:21:49.517889] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.070 03:21:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:12.070 03:21:49 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:12.070 03:21:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:12.070 03:21:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.070 03:21:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:12.070 03:21:49 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.070 03:21:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:12.070 03:21:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.070 [2024-04-19 03:21:49.548529] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.070 03:21:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:12.070 03:21:49 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:12.070 03:21:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:12.070 03:21:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.070 NULL1 00:08:12.070 03:21:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:12.070 03:21:49 -- target/connect_stress.sh@21 -- # PERF_PID=184396 00:08:12.071 03:21:49 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:12.071 03:21:49 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:12.071 03:21:49 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:12.071 03:21:49 -- target/connect_stress.sh@27 -- # seq 1 20 00:08:12.071 03:21:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:12.071 03:21:49 -- target/connect_stress.sh@28 -- # cat 00:08:12.071 03:21:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:12.071 03:21:49 -- target/connect_stress.sh@28 -- # cat 00:08:12.071 03:21:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:12.071 03:21:49 -- target/connect_stress.sh@28 -- # cat 00:08:12.071 03:21:49 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:12.071 03:21:49 -- target/connect_stress.sh@28 -- # cat [the identical '@27 for i in $(seq 1 20)' / '@28 cat' trace pair repeats for the remaining iterations through i=20, with 'EAL: No free 2048 kB hugepages reported on node 1' interleaved mid-loop] 00:08:12.071 03:21:49 -- target/connect_stress.sh@34 -- # kill -0 184396 00:08:12.071 03:21:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:12.071 03:21:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:12.071 03:21:49 -- common/autotest_common.sh@10 -- # set +x [this liveness-check block ('[[ 0 == 0 ]]' / 'kill -0 184396' / 'rpc_cmd' / 'xtrace_disable' / 'set +x') repeats every few hundred milliseconds from 00:08:12.636 through 00:08:22.216 while the stress client runs] 00:08:22.216 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:22.474 03:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.474 03:21:59 -- target/connect_stress.sh@34 -- # kill -0 184396 00:08:22.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (184396) - No such process 00:08:22.474 03:21:59 -- target/connect_stress.sh@38 -- # wait 184396 00:08:22.474 03:21:59 -- target/connect_stress.sh@39 -- # rm -f
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:22.474 03:21:59 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:22.474 03:21:59 -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:22.474 03:21:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:22.474 03:21:59 -- nvmf/common.sh@117 -- # sync 00:08:22.474 03:21:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:22.475 03:21:59 -- nvmf/common.sh@120 -- # set +e 00:08:22.475 03:21:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:22.475 03:21:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:22.475 rmmod nvme_tcp 00:08:22.475 rmmod nvme_fabrics 00:08:22.475 rmmod nvme_keyring 00:08:22.475 03:21:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:22.475 03:21:59 -- nvmf/common.sh@124 -- # set -e 00:08:22.475 03:21:59 -- nvmf/common.sh@125 -- # return 0 00:08:22.475 03:21:59 -- nvmf/common.sh@478 -- # '[' -n 184363 ']' 00:08:22.475 03:21:59 -- nvmf/common.sh@479 -- # killprocess 184363 00:08:22.475 03:21:59 -- common/autotest_common.sh@936 -- # '[' -z 184363 ']' 00:08:22.475 03:21:59 -- common/autotest_common.sh@940 -- # kill -0 184363 00:08:22.475 03:21:59 -- common/autotest_common.sh@941 -- # uname 00:08:22.475 03:21:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:22.475 03:21:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 184363 00:08:22.475 03:21:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:22.475 03:21:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:22.475 03:21:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 184363' 00:08:22.475 killing process with pid 184363 00:08:22.475 03:21:59 -- common/autotest_common.sh@955 -- # kill 184363 00:08:22.475 03:21:59 -- common/autotest_common.sh@960 -- # wait 184363 00:08:22.734 03:22:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:22.734 03:22:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:22.734 03:22:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:22.734 03:22:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.734 03:22:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:22.734 03:22:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.734 03:22:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.734 03:22:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.301 03:22:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:25.301 00:08:25.301 real 0m15.204s 00:08:25.301 user 0m38.285s 00:08:25.301 sys 0m5.846s 00:08:25.301 03:22:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:25.301 03:22:02 -- common/autotest_common.sh@10 -- # set +x 00:08:25.301 ************************************ 00:08:25.301 END TEST nvmf_connect_stress 00:08:25.301 ************************************ 00:08:25.301 03:22:02 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:25.301 03:22:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:25.301 03:22:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.301 03:22:02 -- common/autotest_common.sh@10 -- # set +x 00:08:25.301 ************************************ 00:08:25.301 START TEST nvmf_fused_ordering 00:08:25.301 ************************************ 00:08:25.301 03:22:02 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:25.301 * Looking for test storage... 00:08:25.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.301 03:22:02 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.301 03:22:02 -- nvmf/common.sh@7 -- # uname -s 00:08:25.301 03:22:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.301 03:22:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.301 03:22:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.301 03:22:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.301 03:22:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.301 03:22:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.301 03:22:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.301 03:22:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.301 03:22:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.301 03:22:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.301 03:22:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.301 03:22:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.301 03:22:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.301 03:22:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.301 03:22:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.301 03:22:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.301 03:22:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.301 03:22:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.301 03:22:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.301 03:22:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.301 03:22:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.301 03:22:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.301 03:22:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.301 03:22:02 -- paths/export.sh@5 -- # export PATH 00:08:25.302 03:22:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.302 03:22:02 -- nvmf/common.sh@47 -- # : 0 00:08:25.302 03:22:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.302 03:22:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.302 03:22:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.302 03:22:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.302 03:22:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.302 03:22:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.302 03:22:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.302 03:22:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.302 03:22:02 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:25.302 03:22:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:25.302 03:22:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.302 03:22:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:25.302 03:22:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:25.302 03:22:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:25.302 03:22:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.302 03:22:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.302 03:22:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.302 03:22:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:25.302 03:22:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:25.302 03:22:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.302 03:22:02 -- common/autotest_common.sh@10 -- # set +x 00:08:27.208 03:22:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:27.208 03:22:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:27.208 03:22:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:27.208 03:22:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:27.208 03:22:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:27.208 03:22:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:27.208 03:22:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:27.208 03:22:04 -- nvmf/common.sh@295 -- # net_devs=() 00:08:27.208 03:22:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:27.208 03:22:04 -- nvmf/common.sh@296 -- # e810=() 00:08:27.208 03:22:04 -- nvmf/common.sh@296 -- # local -ga e810 00:08:27.208 03:22:04 -- nvmf/common.sh@297 -- # x722=() 
00:08:27.208 03:22:04 -- nvmf/common.sh@297 -- # local -ga x722 00:08:27.208 03:22:04 -- nvmf/common.sh@298 -- # mlx=() 00:08:27.208 03:22:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:27.208 03:22:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.208 03:22:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.208 03:22:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.208 03:22:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.208 03:22:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.208 03:22:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.208 03:22:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.208 03:22:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.208 03:22:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.208 03:22:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.208 03:22:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.208 03:22:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:27.208 03:22:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:27.208 03:22:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:27.208 03:22:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.208 03:22:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:27.208 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:27.208 03:22:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.208 03:22:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:27.208 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:27.208 03:22:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:27.208 03:22:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.208 03:22:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.208 03:22:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:27.208 03:22:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.208 03:22:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:27.208 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:27.208 03:22:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
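The trace above shows nvmf/common.sh classifying NICs by PCI vendor/device ID (0x8086 - 0x159b is the Intel E810 family this job selects via SPDK_TEST_NVMF_NICS=e810) and then resolving each matched PCI function to its kernel net device through sysfs; the second port, 0000:0a:00.1, follows the same path just below. A minimal standalone sketch of that resolution step, assuming only the vendor/device IDs and the cvl_* naming seen in this log (illustrative, not SPDK's actual helper):

#!/usr/bin/env bash
# Walk every PCI function, keep Intel E810 ports (8086:159b), and print the
# kernel netdev bound to each one -- the association the trace reports as
# "Found net devices under 0000:0a:00.x: cvl_0_x".
for pci in /sys/bus/pci/devices/*; do
  vendor=$(cat "$pci/vendor")     # e.g. 0x8086
  device=$(cat "$pci/device")     # e.g. 0x159b
  [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
  for net in "$pci"/net/*; do
    [[ -e $net ]] || continue     # port has no bound network driver
    echo "Found net devices under ${pci##*/}: ${net##*/}"
  done
done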
00:08:27.208 03:22:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.208 03:22:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.208 03:22:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:27.208 03:22:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.208 03:22:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:27.208 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:27.208 03:22:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.208 03:22:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:27.208 03:22:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:27.208 03:22:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:27.208 03:22:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:27.208 03:22:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.208 03:22:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.208 03:22:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.208 03:22:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:27.208 03:22:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.208 03:22:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.209 03:22:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:27.209 03:22:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.209 03:22:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.209 03:22:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:27.209 03:22:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:27.209 03:22:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.209 03:22:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.209 03:22:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.209 03:22:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.209 03:22:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:27.209 03:22:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.209 03:22:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.209 03:22:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.209 03:22:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:27.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:08:27.209 00:08:27.209 --- 10.0.0.2 ping statistics --- 00:08:27.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.209 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:08:27.209 03:22:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:08:27.209 00:08:27.209 --- 10.0.0.1 ping statistics --- 00:08:27.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.209 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:08:27.209 03:22:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.209 03:22:04 -- nvmf/common.sh@411 -- # return 0 00:08:27.209 03:22:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:27.209 03:22:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.209 03:22:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:27.209 03:22:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:27.209 03:22:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.209 03:22:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:27.209 03:22:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:27.209 03:22:04 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:27.209 03:22:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:27.209 03:22:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:27.209 03:22:04 -- common/autotest_common.sh@10 -- # set +x 00:08:27.209 03:22:04 -- nvmf/common.sh@470 -- # nvmfpid=187629 00:08:27.209 03:22:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:27.209 03:22:04 -- nvmf/common.sh@471 -- # waitforlisten 187629 00:08:27.209 03:22:04 -- common/autotest_common.sh@817 -- # '[' -z 187629 ']' 00:08:27.209 03:22:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.209 03:22:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:27.209 03:22:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.209 03:22:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:27.209 03:22:04 -- common/autotest_common.sh@10 -- # set +x 00:08:27.209 [2024-04-19 03:22:04.719023] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:08:27.209 [2024-04-19 03:22:04.719092] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.209 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.467 [2024-04-19 03:22:04.782691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.467 [2024-04-19 03:22:04.892061] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.467 [2024-04-19 03:22:04.892129] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.467 [2024-04-19 03:22:04.892142] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.467 [2024-04-19 03:22:04.892153] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.467 [2024-04-19 03:22:04.892163] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
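The nvmf_tcp_init sequence traced above builds the whole test topology on a single host: the target-side port cvl_0_0 moves into the cvl_0_0_ns_spdk network namespace as 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and one ping in each direction proves reachability before the target app starts. A condensed re-creation of that wiring, assuming the interface and namespace names from this log (run as root; a sketch, not the common.sh source):

#!/usr/bin/env bash
# Rebuild the two-endpoint NVMe/TCP test topology shown in the trace above.
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port gets its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> root ns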
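Once connectivity is verified, nvmfappstart launches nvmf_tgt inside the namespace (nvmfpid=187629 here) and waitforlisten blocks until the daemon answers on /var/tmp/spdk.sock. A hypothetical poll loop in the spirit of waitforlisten, assuming SPDK's scripts/rpc.py client and the socket path from the log (the retry count and sleep interval are illustrative):

#!/usr/bin/env bash
# Poll until the SPDK app with the given pid serves RPCs on its UNIX socket.
pid=$1
for _ in $(seq 1 100); do
  kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
  if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
    exit 0                                   # socket is up and answering
  fi
  sleep 0.1
done
echo "timed out waiting for /var/tmp/spdk.sock" >&2
exit 1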
00:08:27.467 [2024-04-19 03:22:04.892192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.467 03:22:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:27.467 03:22:05 -- common/autotest_common.sh@850 -- # return 0 00:08:27.467 03:22:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:27.467 03:22:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:27.467 03:22:05 -- common/autotest_common.sh@10 -- # set +x 00:08:27.725 03:22:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.725 03:22:05 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:27.725 03:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.725 03:22:05 -- common/autotest_common.sh@10 -- # set +x 00:08:27.725 [2024-04-19 03:22:05.046425] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.725 03:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.725 03:22:05 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:27.725 03:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.725 03:22:05 -- common/autotest_common.sh@10 -- # set +x 00:08:27.725 03:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.725 03:22:05 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.725 03:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.725 03:22:05 -- common/autotest_common.sh@10 -- # set +x 00:08:27.725 [2024-04-19 03:22:05.062658] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.725 03:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.725 03:22:05 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:27.725 03:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.725 03:22:05 -- common/autotest_common.sh@10 -- # set +x 00:08:27.725 NULL1 00:08:27.725 03:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.725 03:22:05 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:27.725 03:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.725 03:22:05 -- common/autotest_common.sh@10 -- # set +x 00:08:27.725 03:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.725 03:22:05 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:27.725 03:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.725 03:22:05 -- common/autotest_common.sh@10 -- # set +x 00:08:27.725 03:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.725 03:22:05 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:27.725 [2024-04-19 03:22:05.107813] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:08:27.725 [2024-04-19 03:22:05.107857] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187693 ] 00:08:27.725 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.291 Attached to nqn.2016-06.io.spdk:cnode1 00:08:28.291 Namespace ID: 1 size: 1GB 00:08:28.291 fused_ordering(0) [per-iteration counter lines fused_ordering(1) through fused_ordering(955) condensed: one line per completed fused command pair, timestamps advancing from 00:08:28.291 to 00:08:30.928] 00:08:30.928 fused_ordering(956)
fused_ordering(957) 00:08:30.928 fused_ordering(958) 00:08:30.928 fused_ordering(959) 00:08:30.928 fused_ordering(960) 00:08:30.928 fused_ordering(961) 00:08:30.928 fused_ordering(962) 00:08:30.928 fused_ordering(963) 00:08:30.928 fused_ordering(964) 00:08:30.928 fused_ordering(965) 00:08:30.928 fused_ordering(966) 00:08:30.928 fused_ordering(967) 00:08:30.928 fused_ordering(968) 00:08:30.928 fused_ordering(969) 00:08:30.928 fused_ordering(970) 00:08:30.928 fused_ordering(971) 00:08:30.928 fused_ordering(972) 00:08:30.928 fused_ordering(973) 00:08:30.928 fused_ordering(974) 00:08:30.928 fused_ordering(975) 00:08:30.928 fused_ordering(976) 00:08:30.928 fused_ordering(977) 00:08:30.928 fused_ordering(978) 00:08:30.928 fused_ordering(979) 00:08:30.928 fused_ordering(980) 00:08:30.928 fused_ordering(981) 00:08:30.928 fused_ordering(982) 00:08:30.928 fused_ordering(983) 00:08:30.928 fused_ordering(984) 00:08:30.928 fused_ordering(985) 00:08:30.928 fused_ordering(986) 00:08:30.928 fused_ordering(987) 00:08:30.928 fused_ordering(988) 00:08:30.928 fused_ordering(989) 00:08:30.928 fused_ordering(990) 00:08:30.928 fused_ordering(991) 00:08:30.928 fused_ordering(992) 00:08:30.928 fused_ordering(993) 00:08:30.928 fused_ordering(994) 00:08:30.928 fused_ordering(995) 00:08:30.928 fused_ordering(996) 00:08:30.928 fused_ordering(997) 00:08:30.928 fused_ordering(998) 00:08:30.928 fused_ordering(999) 00:08:30.928 fused_ordering(1000) 00:08:30.928 fused_ordering(1001) 00:08:30.928 fused_ordering(1002) 00:08:30.928 fused_ordering(1003) 00:08:30.928 fused_ordering(1004) 00:08:30.928 fused_ordering(1005) 00:08:30.928 fused_ordering(1006) 00:08:30.928 fused_ordering(1007) 00:08:30.928 fused_ordering(1008) 00:08:30.928 fused_ordering(1009) 00:08:30.928 fused_ordering(1010) 00:08:30.928 fused_ordering(1011) 00:08:30.928 fused_ordering(1012) 00:08:30.928 fused_ordering(1013) 00:08:30.928 fused_ordering(1014) 00:08:30.928 fused_ordering(1015) 00:08:30.928 fused_ordering(1016) 00:08:30.928 fused_ordering(1017) 00:08:30.928 fused_ordering(1018) 00:08:30.928 fused_ordering(1019) 00:08:30.928 fused_ordering(1020) 00:08:30.928 fused_ordering(1021) 00:08:30.928 fused_ordering(1022) 00:08:30.928 fused_ordering(1023) 00:08:30.928 03:22:08 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:30.928 03:22:08 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:30.928 03:22:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:30.928 03:22:08 -- nvmf/common.sh@117 -- # sync 00:08:30.928 03:22:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.928 03:22:08 -- nvmf/common.sh@120 -- # set +e 00:08:30.928 03:22:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.928 03:22:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.928 rmmod nvme_tcp 00:08:30.928 rmmod nvme_fabrics 00:08:30.928 rmmod nvme_keyring 00:08:30.928 03:22:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.928 03:22:08 -- nvmf/common.sh@124 -- # set -e 00:08:30.928 03:22:08 -- nvmf/common.sh@125 -- # return 0 00:08:30.928 03:22:08 -- nvmf/common.sh@478 -- # '[' -n 187629 ']' 00:08:30.928 03:22:08 -- nvmf/common.sh@479 -- # killprocess 187629 00:08:30.928 03:22:08 -- common/autotest_common.sh@936 -- # '[' -z 187629 ']' 00:08:30.928 03:22:08 -- common/autotest_common.sh@940 -- # kill -0 187629 00:08:30.928 03:22:08 -- common/autotest_common.sh@941 -- # uname 00:08:30.928 03:22:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:30.928 03:22:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 187629 00:08:31.187 03:22:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:31.187 03:22:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:31.187 03:22:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 187629' 00:08:31.187 killing process with pid 187629 00:08:31.187 03:22:08 -- common/autotest_common.sh@955 -- # kill 187629 00:08:31.187 03:22:08 -- common/autotest_common.sh@960 -- # wait 187629 00:08:31.446 03:22:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:31.446 03:22:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:31.446 03:22:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:31.446 03:22:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.446 03:22:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.446 03:22:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.446 03:22:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.446 03:22:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.354 03:22:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:33.354 00:08:33.354 real 0m8.389s 00:08:33.354 user 0m5.871s 00:08:33.354 sys 0m4.070s 00:08:33.354 03:22:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:33.354 03:22:10 -- common/autotest_common.sh@10 -- # set +x 00:08:33.354 ************************************ 00:08:33.354 END TEST nvmf_fused_ordering 00:08:33.354 ************************************ 00:08:33.354 03:22:10 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:33.354 03:22:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:33.354 03:22:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.354 03:22:10 -- common/autotest_common.sh@10 -- # set +x 00:08:33.613 ************************************ 00:08:33.613 START TEST nvmf_delete_subsystem 00:08:33.613 ************************************ 00:08:33.613 03:22:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:33.613 * Looking for test storage... 
00:08:33.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.613 03:22:10 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.613 03:22:10 -- nvmf/common.sh@7 -- # uname -s 00:08:33.613 03:22:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.613 03:22:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.613 03:22:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.613 03:22:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.613 03:22:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.613 03:22:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.613 03:22:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.613 03:22:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.613 03:22:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.613 03:22:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.613 03:22:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:33.613 03:22:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:33.613 03:22:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.613 03:22:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.613 03:22:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.613 03:22:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.613 03:22:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.613 03:22:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.613 03:22:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.613 03:22:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.613 03:22:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.613 03:22:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.613 03:22:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.613 03:22:11 -- paths/export.sh@5 -- # export PATH 00:08:33.613 03:22:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.613 03:22:11 -- nvmf/common.sh@47 -- # : 0 00:08:33.613 03:22:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.613 03:22:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.613 03:22:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.613 03:22:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.613 03:22:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.613 03:22:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.613 03:22:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.613 03:22:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.613 03:22:11 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:33.613 03:22:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:33.613 03:22:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.613 03:22:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:33.613 03:22:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:33.613 03:22:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:33.613 03:22:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.613 03:22:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.613 03:22:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.613 03:22:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:33.613 03:22:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:33.613 03:22:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.613 03:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:36.145 03:22:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:36.145 03:22:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:36.145 03:22:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:36.145 03:22:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:36.145 03:22:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:36.145 03:22:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:36.145 03:22:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:36.145 03:22:13 -- nvmf/common.sh@295 -- # net_devs=() 00:08:36.145 03:22:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:36.145 03:22:13 -- nvmf/common.sh@296 -- # e810=() 00:08:36.145 03:22:13 -- nvmf/common.sh@296 -- # local -ga e810 00:08:36.145 03:22:13 -- nvmf/common.sh@297 -- # x722=() 
00:08:36.145 03:22:13 -- nvmf/common.sh@297 -- # local -ga x722 00:08:36.145 03:22:13 -- nvmf/common.sh@298 -- # mlx=() 00:08:36.145 03:22:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:36.145 03:22:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.145 03:22:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.145 03:22:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.145 03:22:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.145 03:22:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.145 03:22:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.145 03:22:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.145 03:22:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.145 03:22:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.145 03:22:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.145 03:22:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.145 03:22:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:36.145 03:22:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:36.145 03:22:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:36.145 03:22:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.145 03:22:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:36.145 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:36.145 03:22:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.145 03:22:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:36.145 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:36.145 03:22:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:36.145 03:22:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:36.145 03:22:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.145 03:22:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.145 03:22:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:36.145 03:22:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.145 03:22:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:36.145 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:36.145 03:22:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
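Note: the scan above matched both E810 functions (0000:0a:00.0 and 0000:0a:00.1, vendor 0x8086, device 0x159b) against the device-ID tables loaded into the e810 array, and their bound net devices are collected here and just below. As a standalone illustration of that sysfs-based matching, not the actual gather_supported_nvmf_pci_devs implementation, something like the following reproduces the 'Found ...' lines:

    #!/usr/bin/env bash
    # Illustration only: enumerate Intel E810 PCI functions (vendor 0x8086,
    # device IDs 0x1592/0x159b, as in the e810 array above) via sysfs and
    # print the kernel net devices bound to each function.
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        if [[ $vendor == 0x8086 ]] && [[ $device == 0x1592 || $device == 0x159b ]]; then
            echo "Found ${dev##*/} ($vendor - $device)"
            ls "$dev/net" 2>/dev/null   # e.g. cvl_0_0 / cvl_0_1 once the ice driver is bound
        fi
    done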
00:08:36.145 03:22:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.145 03:22:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.145 03:22:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:36.145 03:22:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.145 03:22:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:36.145 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:36.145 03:22:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.145 03:22:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:36.146 03:22:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:36.146 03:22:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:36.146 03:22:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:36.146 03:22:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:36.146 03:22:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.146 03:22:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.146 03:22:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.146 03:22:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:36.146 03:22:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.146 03:22:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.146 03:22:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:36.146 03:22:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.146 03:22:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.146 03:22:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:36.146 03:22:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:36.146 03:22:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.146 03:22:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.146 03:22:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.146 03:22:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.146 03:22:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:36.146 03:22:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.146 03:22:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.146 03:22:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.146 03:22:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:36.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:08:36.146 00:08:36.146 --- 10.0.0.2 ping statistics --- 00:08:36.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.146 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:08:36.146 03:22:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:08:36.146 00:08:36.146 --- 10.0.0.1 ping statistics --- 00:08:36.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.146 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:36.146 03:22:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.146 03:22:13 -- nvmf/common.sh@411 -- # return 0 00:08:36.146 03:22:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:36.146 03:22:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.146 03:22:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:36.146 03:22:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:36.146 03:22:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.146 03:22:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:36.146 03:22:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:36.146 03:22:13 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:36.146 03:22:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:36.146 03:22:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:36.146 03:22:13 -- common/autotest_common.sh@10 -- # set +x 00:08:36.146 03:22:13 -- nvmf/common.sh@470 -- # nvmfpid=190032 00:08:36.146 03:22:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:36.146 03:22:13 -- nvmf/common.sh@471 -- # waitforlisten 190032 00:08:36.146 03:22:13 -- common/autotest_common.sh@817 -- # '[' -z 190032 ']' 00:08:36.146 03:22:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.146 03:22:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:36.146 03:22:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.146 03:22:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:36.146 03:22:13 -- common/autotest_common.sh@10 -- # set +x 00:08:36.146 [2024-04-19 03:22:13.295879] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:08:36.146 [2024-04-19 03:22:13.295959] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.146 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.146 [2024-04-19 03:22:13.361786] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:36.146 [2024-04-19 03:22:13.471223] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.146 [2024-04-19 03:22:13.471284] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.146 [2024-04-19 03:22:13.471297] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.146 [2024-04-19 03:22:13.471308] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.146 [2024-04-19 03:22:13.471317] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
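Note: the two clean pings above are the checkpoint for the namespace topology that nvmf_tcp_init assembled in the preceding lines. Collected in one place for readability, the traced ip/iptables calls amount to the following sequence (a condensed replay of commands already shown in the log, not new configuration; cvl_0_0/cvl_0_1 are the two E810 ports found earlier):

    # One host plays both roles: the target port cvl_0_0 is isolated in its own
    # network namespace, while the initiator keeps cvl_0_1 in the root namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

The nvmf_tgt application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt, as traced above), so the perf initiator in the root namespace reaches it only through the 10.0.0.0/24 link.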
00:08:36.146 [2024-04-19 03:22:13.471403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.146 [2024-04-19 03:22:13.471408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.146 03:22:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:36.146 03:22:13 -- common/autotest_common.sh@850 -- # return 0 00:08:36.146 03:22:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:36.146 03:22:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:36.146 03:22:13 -- common/autotest_common.sh@10 -- # set +x 00:08:36.146 03:22:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.146 03:22:13 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.146 03:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.146 03:22:13 -- common/autotest_common.sh@10 -- # set +x 00:08:36.146 [2024-04-19 03:22:13.618350] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.146 03:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.146 03:22:13 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:36.146 03:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.146 03:22:13 -- common/autotest_common.sh@10 -- # set +x 00:08:36.146 03:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.146 03:22:13 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.146 03:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.146 03:22:13 -- common/autotest_common.sh@10 -- # set +x 00:08:36.146 [2024-04-19 03:22:13.634599] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.146 03:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.146 03:22:13 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:36.146 03:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.146 03:22:13 -- common/autotest_common.sh@10 -- # set +x 00:08:36.146 NULL1 00:08:36.146 03:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.146 03:22:13 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:36.146 03:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.146 03:22:13 -- common/autotest_common.sh@10 -- # set +x 00:08:36.146 Delay0 00:08:36.147 03:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.147 03:22:13 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.147 03:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.147 03:22:13 -- common/autotest_common.sh@10 -- # set +x 00:08:36.147 03:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.147 03:22:13 -- target/delete_subsystem.sh@28 -- # perf_pid=190066 00:08:36.147 03:22:13 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:36.147 03:22:13 -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:36.147 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.404 [2024-04-19 03:22:13.709285] 
subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:38.302 03:22:15 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.302 03:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.302 03:22:15 -- common/autotest_common.sh@10 -- # set +x 00:08:38.559 [repetitive completion records condensed: interleaved 'Write completed with error (sct=0, sc=8)' / 'Read completed with error (sct=0, sc=8)' lines with periodic 'starting I/O failed: -6' markers, logged at 00:08:38.559-00:08:38.560 as in-flight spdk_nvme_perf I/O against the Delay0 namespace is failed by the subsystem delete; the distinct transport-state errors from this window are retained below in order] 00:08:38.559 [2024-04-19 03:22:15.921402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0924000c00 is same with the state(5) to be set 00:08:38.560 [2024-04-19 03:22:15.922252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc5ba0 is same with the state(5) to be set 00:08:38.560 [further repetitive aborted-completion records condensed]
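Note: the status pair repeated in these aborted completions, sct=0 sc=8, decodes per the NVMe base specification as generic command status 0x08, Command Aborted due to SQ Deletion. That is the expected completion for queued commands once nvmf_delete_subsystem tears down the controller's queues, so these records show the test exercising exactly the path it targets: deleting a subsystem that still has active I/O outstanding against the deliberately slow Delay0 bdev.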
00:08:39.492 [2024-04-19 03:22:16.889021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe4120 is same with the state(5) to be set 00:08:39.492 [further repetitive aborted-completion records condensed] 00:08:39.492 [2024-04-19 03:22:16.922303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f092400c250 is same with the state(5) to be set 00:08:39.492 [2024-04-19 03:22:16.923052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc5880 is same with the state(5) to be set 00:08:39.492 [2024-04-19 03:22:16.923313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc5d30 is same with the state(5) to be set 00:08:39.492 [2024-04-19 03:22:16.923615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc5a10 is
same with the state(5) to be set 00:08:39.493 [2024-04-19 03:22:16.924478] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe4120 (9): Bad file descriptor 00:08:39.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:39.493 03:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:39.493 03:22:16 -- target/delete_subsystem.sh@34 -- # delay=0 00:08:39.493 03:22:16 -- target/delete_subsystem.sh@35 -- # kill -0 190066 00:08:39.493 03:22:16 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:39.493 Initializing NVMe Controllers 00:08:39.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:39.493 Controller IO queue size 128, less than required. 00:08:39.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:39.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:39.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:39.493 Initialization complete. Launching workers. 00:08:39.493 ======================================================== 00:08:39.493 Latency(us) 00:08:39.493 Device Information : IOPS MiB/s Average min max 00:08:39.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.16 0.08 1067100.80 2158.78 2002973.35 00:08:39.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.33 0.07 909022.75 592.01 1012885.41 00:08:39.493 ======================================================== 00:08:39.493 Total : 316.49 0.15 993512.74 592.01 2002973.35 00:08:39.493 00:08:40.057 03:22:17 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:40.057 03:22:17 -- target/delete_subsystem.sh@35 -- # kill -0 190066 00:08:40.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (190066) - No such process 00:08:40.057 03:22:17 -- target/delete_subsystem.sh@45 -- # NOT wait 190066 00:08:40.057 03:22:17 -- common/autotest_common.sh@638 -- # local es=0 00:08:40.057 03:22:17 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 190066 00:08:40.057 03:22:17 -- common/autotest_common.sh@626 -- # local arg=wait 00:08:40.057 03:22:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:40.057 03:22:17 -- common/autotest_common.sh@630 -- # type -t wait 00:08:40.057 03:22:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:40.057 03:22:17 -- common/autotest_common.sh@641 -- # wait 190066 00:08:40.057 03:22:17 -- common/autotest_common.sh@641 -- # es=1 00:08:40.057 03:22:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:40.057 03:22:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:40.057 03:22:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:40.057 03:22:17 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:40.057 03:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:40.057 03:22:17 -- common/autotest_common.sh@10 -- # set +x 00:08:40.057 03:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:40.057 03:22:17 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.057 03:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:40.057 03:22:17 -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.057 [2024-04-19 03:22:17.446532] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.057 03:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:40.057 03:22:17 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.057 03:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:40.057 03:22:17 -- common/autotest_common.sh@10 -- # set +x 00:08:40.057 03:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:40.057 03:22:17 -- target/delete_subsystem.sh@54 -- # perf_pid=190584 00:08:40.057 03:22:17 -- target/delete_subsystem.sh@56 -- # delay=0 00:08:40.057 03:22:17 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:40.057 03:22:17 -- target/delete_subsystem.sh@57 -- # kill -0 190584 00:08:40.057 03:22:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:40.057 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.057 [2024-04-19 03:22:17.512345] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:40.621 03:22:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:40.621 03:22:17 -- target/delete_subsystem.sh@57 -- # kill -0 190584 00:08:40.621 03:22:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:41.186 03:22:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:41.186 03:22:18 -- target/delete_subsystem.sh@57 -- # kill -0 190584 00:08:41.186 03:22:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:41.443 03:22:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:41.443 03:22:18 -- target/delete_subsystem.sh@57 -- # kill -0 190584 00:08:41.443 03:22:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.012 03:22:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:42.012 03:22:19 -- target/delete_subsystem.sh@57 -- # kill -0 190584 00:08:42.012 03:22:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.609 03:22:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:42.609 03:22:19 -- target/delete_subsystem.sh@57 -- # kill -0 190584 00:08:42.609 03:22:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.174 03:22:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:43.174 03:22:20 -- target/delete_subsystem.sh@57 -- # kill -0 190584 00:08:43.174 03:22:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.432 Initializing NVMe Controllers 00:08:43.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:43.432 Controller IO queue size 128, less than required. 00:08:43.432 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:43.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:43.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:43.432 Initialization complete. Launching workers. 
00:08:43.432 ======================================================== 00:08:43.432 Latency(us) 00:08:43.432 Device Information : IOPS MiB/s Average min max 00:08:43.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004824.98 1000228.80 1012781.63 00:08:43.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004603.75 1000220.91 1042965.79 00:08:43.432 ======================================================== 00:08:43.432 Total : 256.00 0.12 1004714.37 1000220.91 1042965.79 00:08:43.432 00:08:43.432 03:22:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:43.432 03:22:20 -- target/delete_subsystem.sh@57 -- # kill -0 190584 00:08:43.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (190584) - No such process 00:08:43.432 03:22:20 -- target/delete_subsystem.sh@67 -- # wait 190584 00:08:43.432 03:22:20 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:43.432 03:22:20 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:43.432 03:22:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:43.432 03:22:20 -- nvmf/common.sh@117 -- # sync 00:08:43.432 03:22:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:43.432 03:22:20 -- nvmf/common.sh@120 -- # set +e 00:08:43.432 03:22:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:43.432 03:22:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:43.432 rmmod nvme_tcp 00:08:43.690 rmmod nvme_fabrics 00:08:43.690 rmmod nvme_keyring 00:08:43.690 03:22:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.690 03:22:21 -- nvmf/common.sh@124 -- # set -e 00:08:43.690 03:22:21 -- nvmf/common.sh@125 -- # return 0 00:08:43.690 03:22:21 -- nvmf/common.sh@478 -- # '[' -n 190032 ']' 00:08:43.690 03:22:21 -- nvmf/common.sh@479 -- # killprocess 190032 00:08:43.690 03:22:21 -- common/autotest_common.sh@936 -- # '[' -z 190032 ']' 00:08:43.690 03:22:21 -- common/autotest_common.sh@940 -- # kill -0 190032 00:08:43.690 03:22:21 -- common/autotest_common.sh@941 -- # uname 00:08:43.690 03:22:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:43.690 03:22:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 190032 00:08:43.690 03:22:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:43.690 03:22:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:43.690 03:22:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 190032' 00:08:43.690 killing process with pid 190032 00:08:43.690 03:22:21 -- common/autotest_common.sh@955 -- # kill 190032 00:08:43.690 03:22:21 -- common/autotest_common.sh@960 -- # wait 190032 00:08:43.948 03:22:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:43.948 03:22:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:43.948 03:22:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:43.948 03:22:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:43.948 03:22:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:43.948 03:22:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.948 03:22:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.948 03:22:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.851 03:22:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:45.851 00:08:45.851 real 0m12.462s 00:08:45.851 user 0m28.006s 00:08:45.851 sys 0m3.099s 00:08:45.851 03:22:23 
-- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:45.851 03:22:23 -- common/autotest_common.sh@10 -- # set +x 00:08:45.851 ************************************ 00:08:45.851 END TEST nvmf_delete_subsystem 00:08:45.851 ************************************ 00:08:46.109 03:22:23 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:08:46.109 03:22:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:46.109 03:22:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.109 03:22:23 -- common/autotest_common.sh@10 -- # set +x 00:08:46.109 ************************************ 00:08:46.109 START TEST nvmf_ns_masking 00:08:46.109 ************************************ 00:08:46.109 03:22:23 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:08:46.109 * Looking for test storage... 00:08:46.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.109 03:22:23 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.109 03:22:23 -- nvmf/common.sh@7 -- # uname -s 00:08:46.109 03:22:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.109 03:22:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.109 03:22:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.109 03:22:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.109 03:22:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.109 03:22:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.109 03:22:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.109 03:22:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.109 03:22:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.109 03:22:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.109 03:22:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.109 03:22:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.109 03:22:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.109 03:22:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.109 03:22:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.109 03:22:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.109 03:22:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.109 03:22:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.109 03:22:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.109 03:22:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.109 03:22:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.109 03:22:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.109 03:22:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.109 03:22:23 -- paths/export.sh@5 -- # export PATH 00:08:46.109 03:22:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.109 03:22:23 -- nvmf/common.sh@47 -- # : 0 00:08:46.109 03:22:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:46.109 03:22:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:46.109 03:22:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.109 03:22:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.109 03:22:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.109 03:22:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:46.109 03:22:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:46.109 03:22:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:46.109 03:22:23 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.109 03:22:23 -- target/ns_masking.sh@11 -- # loops=5 00:08:46.109 03:22:23 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:08:46.109 03:22:23 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:08:46.109 03:22:23 -- target/ns_masking.sh@15 -- # uuidgen 00:08:46.109 03:22:23 -- target/ns_masking.sh@15 -- # HOSTID=a413382a-8a59-43ba-a1f6-f95d1d26d25a 00:08:46.109 03:22:23 -- target/ns_masking.sh@44 -- # nvmftestinit 00:08:46.109 03:22:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:46.109 03:22:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.109 03:22:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:46.109 03:22:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:46.109 03:22:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:46.109 03:22:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.109 03:22:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.109 03:22:23 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:08:46.109 03:22:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:46.109 03:22:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:46.109 03:22:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:46.109 03:22:23 -- common/autotest_common.sh@10 -- # set +x 00:08:48.012 03:22:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:48.012 03:22:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.012 03:22:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.012 03:22:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.012 03:22:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.012 03:22:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.012 03:22:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.012 03:22:25 -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.012 03:22:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.012 03:22:25 -- nvmf/common.sh@296 -- # e810=() 00:08:48.012 03:22:25 -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.012 03:22:25 -- nvmf/common.sh@297 -- # x722=() 00:08:48.012 03:22:25 -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.012 03:22:25 -- nvmf/common.sh@298 -- # mlx=() 00:08:48.012 03:22:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.012 03:22:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.012 03:22:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.012 03:22:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.012 03:22:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.012 03:22:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.012 03:22:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.012 03:22:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.012 03:22:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.012 03:22:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.012 03:22:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.012 03:22:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.012 03:22:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.012 03:22:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.012 03:22:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.012 03:22:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.012 03:22:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:48.012 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:48.012 03:22:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.012 03:22:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:48.012 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:48.012 03:22:25 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.012 03:22:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.012 03:22:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.012 03:22:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:48.012 03:22:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.012 03:22:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:48.012 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:48.012 03:22:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.012 03:22:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.012 03:22:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.012 03:22:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:48.012 03:22:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.012 03:22:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:48.012 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:48.012 03:22:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.012 03:22:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:48.012 03:22:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:48.012 03:22:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:48.012 03:22:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:48.012 03:22:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.012 03:22:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.012 03:22:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.012 03:22:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.012 03:22:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.012 03:22:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.012 03:22:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.012 03:22:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.012 03:22:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.012 03:22:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.012 03:22:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.012 03:22:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.271 03:22:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.271 03:22:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.271 03:22:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.271 03:22:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.271 03:22:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.271 03:22:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.271 03:22:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.271 03:22:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:08:48.271 00:08:48.271 --- 10.0.0.2 ping statistics --- 00:08:48.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.271 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:48.271 03:22:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:08:48.271 00:08:48.271 --- 10.0.0.1 ping statistics --- 00:08:48.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.271 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:08:48.271 03:22:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.271 03:22:25 -- nvmf/common.sh@411 -- # return 0 00:08:48.271 03:22:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:48.271 03:22:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.271 03:22:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:48.271 03:22:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:48.271 03:22:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.271 03:22:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:48.271 03:22:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:48.271 03:22:25 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:08:48.271 03:22:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:48.271 03:22:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:48.271 03:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:48.271 03:22:25 -- nvmf/common.sh@470 -- # nvmfpid=192936 00:08:48.271 03:22:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:48.271 03:22:25 -- nvmf/common.sh@471 -- # waitforlisten 192936 00:08:48.271 03:22:25 -- common/autotest_common.sh@817 -- # '[' -z 192936 ']' 00:08:48.271 03:22:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.271 03:22:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:48.271 03:22:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.271 03:22:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:48.271 03:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:48.271 [2024-04-19 03:22:25.758350] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:08:48.271 [2024-04-19 03:22:25.758460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.271 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.271 [2024-04-19 03:22:25.824872] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.530 [2024-04-19 03:22:25.936778] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:48.530 [2024-04-19 03:22:25.936840] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.530 [2024-04-19 03:22:25.936854] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.530 [2024-04-19 03:22:25.936865] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.530 [2024-04-19 03:22:25.936874] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.530 [2024-04-19 03:22:25.936926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.530 [2024-04-19 03:22:25.936985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.530 [2024-04-19 03:22:25.937051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.530 [2024-04-19 03:22:25.937054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.530 03:22:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:48.530 03:22:26 -- common/autotest_common.sh@850 -- # return 0 00:08:48.530 03:22:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:48.530 03:22:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:48.530 03:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:48.787 03:22:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.787 03:22:26 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:49.045 [2024-04-19 03:22:26.352232] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.045 03:22:26 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:08:49.045 03:22:26 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:08:49.045 03:22:26 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:08:49.304 Malloc1 00:08:49.304 03:22:26 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:08:49.562 Malloc2 00:08:49.562 03:22:26 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:49.820 03:22:27 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:08:50.078 03:22:27 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.078 [2024-04-19 03:22:27.614787] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.078 03:22:27 -- target/ns_masking.sh@61 -- # connect 00:08:50.078 03:22:27 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a413382a-8a59-43ba-a1f6-f95d1d26d25a -a 10.0.0.2 -s 4420 -i 4 00:08:50.336 03:22:27 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:08:50.336 03:22:27 -- common/autotest_common.sh@1184 -- # local i=0 00:08:50.336 03:22:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:50.336 03:22:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
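For reference, the connect/waitforserial sequence traced here reduces to roughly the sketch below. The address, host NQN, host UUID, queue count, serial string, retry bound and 2-second poll interval are all taken from the command lines in the trace; the helper structure itself is paraphrased, so treat this as a sketch rather than the script verbatim.

  # paraphrase of the traced helpers, not the script itself
  # attach the initiator with an explicit host identity; the target keys
  # namespace masking on this host NQN, so it decides what becomes visible
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 \
      -I a413382a-8a59-43ba-a1f6-f95d1d26d25a -i 4

  # poll until a block device carrying the subsystem serial shows up
  for _ in $(seq 1 15); do
      lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME && break
      sleep 2
  done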
00:08:50.336 03:22:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:52.861 03:22:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:52.861 03:22:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:52.861 03:22:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:52.861 03:22:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:52.861 03:22:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:52.861 03:22:29 -- common/autotest_common.sh@1194 -- # return 0 00:08:52.861 03:22:29 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:08:52.861 03:22:29 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:52.861 03:22:29 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:08:52.861 03:22:29 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:08:52.861 03:22:29 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:08:52.861 03:22:29 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:52.861 03:22:29 -- target/ns_masking.sh@39 -- # grep 0x1 00:08:52.861 [ 0]:0x1 00:08:52.861 03:22:29 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:52.861 03:22:29 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:52.861 03:22:29 -- target/ns_masking.sh@40 -- # nguid=b747166d0d14418ba881373720bced97 00:08:52.861 03:22:29 -- target/ns_masking.sh@41 -- # [[ b747166d0d14418ba881373720bced97 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:52.861 03:22:29 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:08:52.861 03:22:30 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:08:52.861 03:22:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:52.861 03:22:30 -- target/ns_masking.sh@39 -- # grep 0x1 00:08:52.861 [ 0]:0x1 00:08:52.861 03:22:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:52.861 03:22:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:52.861 03:22:30 -- target/ns_masking.sh@40 -- # nguid=b747166d0d14418ba881373720bced97 00:08:52.861 03:22:30 -- target/ns_masking.sh@41 -- # [[ b747166d0d14418ba881373720bced97 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:52.861 03:22:30 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:08:52.861 03:22:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:52.861 03:22:30 -- target/ns_masking.sh@39 -- # grep 0x2 00:08:52.861 [ 1]:0x2 00:08:52.861 03:22:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:52.861 03:22:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:52.861 03:22:30 -- target/ns_masking.sh@40 -- # nguid=3d4f8986c90b456292092b00d72c9411 00:08:52.861 03:22:30 -- target/ns_masking.sh@41 -- # [[ 3d4f8986c90b456292092b00d72c9411 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:52.861 03:22:30 -- target/ns_masking.sh@69 -- # disconnect 00:08:52.861 03:22:30 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:52.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.861 03:22:30 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.118 03:22:30 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:08:53.375 03:22:30 -- target/ns_masking.sh@77 -- # connect 1 00:08:53.375 03:22:30 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a413382a-8a59-43ba-a1f6-f95d1d26d25a -a 10.0.0.2 -s 4420 -i 4 00:08:53.632 03:22:30 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:08:53.632 03:22:30 -- common/autotest_common.sh@1184 -- # local i=0 00:08:53.632 03:22:30 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:53.632 03:22:30 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:08:53.632 03:22:30 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:08:53.632 03:22:30 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:55.527 03:22:32 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:55.527 03:22:32 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:55.527 03:22:32 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:55.527 03:22:32 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:55.527 03:22:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:55.527 03:22:33 -- common/autotest_common.sh@1194 -- # return 0 00:08:55.527 03:22:33 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:08:55.527 03:22:33 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:55.527 03:22:33 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:08:55.527 03:22:33 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:08:55.527 03:22:33 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:08:55.527 03:22:33 -- common/autotest_common.sh@638 -- # local es=0 00:08:55.527 03:22:33 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:08:55.527 03:22:33 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:08:55.527 03:22:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:55.527 03:22:33 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:08:55.527 03:22:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:55.527 03:22:33 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:08:55.527 03:22:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:55.527 03:22:33 -- target/ns_masking.sh@39 -- # grep 0x1 00:08:55.527 03:22:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:55.527 03:22:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:55.785 03:22:33 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:08:55.785 03:22:33 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:55.785 03:22:33 -- common/autotest_common.sh@641 -- # es=1 00:08:55.785 03:22:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:55.785 03:22:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:55.785 03:22:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:55.785 03:22:33 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:08:55.785 03:22:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:55.785 03:22:33 -- target/ns_masking.sh@39 -- # grep 0x2 00:08:55.785 [ 0]:0x2 00:08:55.785 03:22:33 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:08:55.785 03:22:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:55.785 03:22:33 -- target/ns_masking.sh@40 -- # nguid=3d4f8986c90b456292092b00d72c9411 00:08:55.785 03:22:33 -- target/ns_masking.sh@41 -- # [[ 3d4f8986c90b456292092b00d72c9411 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:55.785 03:22:33 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:56.042 03:22:33 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:08:56.042 03:22:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:56.042 03:22:33 -- target/ns_masking.sh@39 -- # grep 0x1 00:08:56.042 [ 0]:0x1 00:08:56.042 03:22:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:56.042 03:22:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:56.042 03:22:33 -- target/ns_masking.sh@40 -- # nguid=b747166d0d14418ba881373720bced97 00:08:56.042 03:22:33 -- target/ns_masking.sh@41 -- # [[ b747166d0d14418ba881373720bced97 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.042 03:22:33 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:08:56.042 03:22:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:56.042 03:22:33 -- target/ns_masking.sh@39 -- # grep 0x2 00:08:56.042 [ 1]:0x2 00:08:56.042 03:22:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:56.042 03:22:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:56.043 03:22:33 -- target/ns_masking.sh@40 -- # nguid=3d4f8986c90b456292092b00d72c9411 00:08:56.043 03:22:33 -- target/ns_masking.sh@41 -- # [[ 3d4f8986c90b456292092b00d72c9411 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.043 03:22:33 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:56.300 03:22:33 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:08:56.300 03:22:33 -- common/autotest_common.sh@638 -- # local es=0 00:08:56.300 03:22:33 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:08:56.300 03:22:33 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:08:56.300 03:22:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:56.300 03:22:33 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:08:56.300 03:22:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:56.300 03:22:33 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:08:56.300 03:22:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:56.300 03:22:33 -- target/ns_masking.sh@39 -- # grep 0x1 00:08:56.300 03:22:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:56.300 03:22:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:56.300 03:22:33 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:08:56.300 03:22:33 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.300 03:22:33 -- common/autotest_common.sh@641 -- # es=1 00:08:56.300 03:22:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:56.301 03:22:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:56.301 03:22:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:56.301 03:22:33 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:08:56.301 03:22:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:56.301 03:22:33 -- target/ns_masking.sh@39 -- # grep 0x2 00:08:56.301 [ 0]:0x2 00:08:56.301 03:22:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:56.301 03:22:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:56.301 03:22:33 -- target/ns_masking.sh@40 -- # nguid=3d4f8986c90b456292092b00d72c9411 00:08:56.301 03:22:33 -- target/ns_masking.sh@41 -- # [[ 3d4f8986c90b456292092b00d72c9411 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.301 03:22:33 -- target/ns_masking.sh@91 -- # disconnect 00:08:56.301 03:22:33 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.557 03:22:33 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:56.815 03:22:34 -- target/ns_masking.sh@95 -- # connect 2 00:08:56.815 03:22:34 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a413382a-8a59-43ba-a1f6-f95d1d26d25a -a 10.0.0.2 -s 4420 -i 4 00:08:56.815 03:22:34 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:08:56.815 03:22:34 -- common/autotest_common.sh@1184 -- # local i=0 00:08:56.815 03:22:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:56.815 03:22:34 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:08:56.815 03:22:34 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:08:56.815 03:22:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:59.339 03:22:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:59.339 03:22:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:59.339 03:22:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:59.339 03:22:36 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:08:59.339 03:22:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:59.339 03:22:36 -- common/autotest_common.sh@1194 -- # return 0 00:08:59.339 03:22:36 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:08:59.339 03:22:36 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:59.339 03:22:36 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:08:59.339 03:22:36 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:08:59.339 03:22:36 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:08:59.339 03:22:36 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:59.339 03:22:36 -- target/ns_masking.sh@39 -- # grep 0x1 00:08:59.339 [ 0]:0x1 00:08:59.339 03:22:36 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:59.339 03:22:36 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:59.339 03:22:36 -- target/ns_masking.sh@40 -- # nguid=b747166d0d14418ba881373720bced97 00:08:59.339 03:22:36 -- target/ns_masking.sh@41 -- # [[ b747166d0d14418ba881373720bced97 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:59.339 03:22:36 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:08:59.339 03:22:36 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:59.339 03:22:36 -- target/ns_masking.sh@39 -- # grep 0x2 00:08:59.339 [ 1]:0x2 
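The ns_is_visible checks repeated through this test boil down to the function below. The controller node /dev/nvme0, the jq query and the all-zero NGUID convention for a masked namespace all come straight from the trace; the function body is a reconstruction, not the script verbatim.

  ns_is_visible() {    # reconstructed from the trace
      local nsid=$1
      # the NSID must show up in the controller's active namespace list...
      nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
      # ...and identify-namespace must report a real, non-zero NGUID;
      # a namespace masked from this host reports 32 zeros instead
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }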
00:08:59.339 03:22:36 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:59.339 03:22:36 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:59.339 03:22:36 -- target/ns_masking.sh@40 -- # nguid=3d4f8986c90b456292092b00d72c9411 00:08:59.339 03:22:36 -- target/ns_masking.sh@41 -- # [[ 3d4f8986c90b456292092b00d72c9411 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:59.339 03:22:36 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:59.596 03:22:36 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:08:59.596 03:22:36 -- common/autotest_common.sh@638 -- # local es=0 00:08:59.596 03:22:36 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:08:59.596 03:22:36 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:08:59.596 03:22:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:59.596 03:22:36 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:08:59.596 03:22:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:59.596 03:22:36 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:08:59.596 03:22:36 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:59.596 03:22:36 -- target/ns_masking.sh@39 -- # grep 0x1 00:08:59.596 03:22:36 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:59.596 03:22:36 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:59.596 03:22:36 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:08:59.596 03:22:36 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:59.596 03:22:36 -- common/autotest_common.sh@641 -- # es=1 00:08:59.596 03:22:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:59.596 03:22:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:59.597 03:22:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:59.597 03:22:36 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:08:59.597 03:22:36 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:59.597 03:22:36 -- target/ns_masking.sh@39 -- # grep 0x2 00:08:59.597 [ 0]:0x2 00:08:59.597 03:22:36 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:59.597 03:22:36 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:59.597 03:22:37 -- target/ns_masking.sh@40 -- # nguid=3d4f8986c90b456292092b00d72c9411 00:08:59.597 03:22:37 -- target/ns_masking.sh@41 -- # [[ 3d4f8986c90b456292092b00d72c9411 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:59.597 03:22:37 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:59.597 03:22:37 -- common/autotest_common.sh@638 -- # local es=0 00:08:59.597 03:22:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:59.597 03:22:37 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.597 03:22:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:59.597 03:22:37 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.597 03:22:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:59.597 03:22:37 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.597 03:22:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:59.597 03:22:37 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.597 03:22:37 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:59.597 03:22:37 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:59.854 [2024-04-19 03:22:37.234337] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:08:59.854 request: 00:08:59.854 { 00:08:59.854 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:59.854 "nsid": 2, 00:08:59.854 "host": "nqn.2016-06.io.spdk:host1", 00:08:59.854 "method": "nvmf_ns_remove_host", 00:08:59.854 "req_id": 1 00:08:59.854 } 00:08:59.854 Got JSON-RPC error response 00:08:59.854 response: 00:08:59.854 { 00:08:59.854 "code": -32602, 00:08:59.854 "message": "Invalid parameters" 00:08:59.854 } 00:08:59.854 03:22:37 -- common/autotest_common.sh@641 -- # es=1 00:08:59.854 03:22:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:59.854 03:22:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:59.854 03:22:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:59.854 03:22:37 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:08:59.854 03:22:37 -- common/autotest_common.sh@638 -- # local es=0 00:08:59.854 03:22:37 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:08:59.854 03:22:37 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:08:59.854 03:22:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:59.854 03:22:37 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:08:59.854 03:22:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:59.854 03:22:37 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:08:59.854 03:22:37 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:59.854 03:22:37 -- target/ns_masking.sh@39 -- # grep 0x1 00:08:59.854 03:22:37 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:59.854 03:22:37 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:59.854 03:22:37 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:08:59.854 03:22:37 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:59.854 03:22:37 -- common/autotest_common.sh@641 -- # es=1 00:08:59.854 03:22:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:59.854 03:22:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:59.854 03:22:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:59.854 03:22:37 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:08:59.854 03:22:37 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:59.854 03:22:37 -- target/ns_masking.sh@39 -- # grep 0x2 00:08:59.854 [ 0]:0x2 00:08:59.854 03:22:37 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:59.854 03:22:37 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:08:59.854 03:22:37 -- target/ns_masking.sh@40 -- # nguid=3d4f8986c90b456292092b00d72c9411 00:08:59.854 03:22:37 -- target/ns_masking.sh@41 -- # [[ 3d4f8986c90b456292092b00d72c9411 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:59.854 03:22:37 -- target/ns_masking.sh@108 -- # disconnect 00:08:59.854 03:22:37 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.854 03:22:37 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.111 03:22:37 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:00.111 03:22:37 -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:00.111 03:22:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:00.111 03:22:37 -- nvmf/common.sh@117 -- # sync 00:09:00.111 03:22:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:00.111 03:22:37 -- nvmf/common.sh@120 -- # set +e 00:09:00.111 03:22:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:00.111 03:22:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:00.111 rmmod nvme_tcp 00:09:00.111 rmmod nvme_fabrics 00:09:00.111 rmmod nvme_keyring 00:09:00.111 03:22:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:00.369 03:22:37 -- nvmf/common.sh@124 -- # set -e 00:09:00.369 03:22:37 -- nvmf/common.sh@125 -- # return 0 00:09:00.369 03:22:37 -- nvmf/common.sh@478 -- # '[' -n 192936 ']' 00:09:00.369 03:22:37 -- nvmf/common.sh@479 -- # killprocess 192936 00:09:00.369 03:22:37 -- common/autotest_common.sh@936 -- # '[' -z 192936 ']' 00:09:00.369 03:22:37 -- common/autotest_common.sh@940 -- # kill -0 192936 00:09:00.369 03:22:37 -- common/autotest_common.sh@941 -- # uname 00:09:00.369 03:22:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:00.369 03:22:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 192936 00:09:00.369 03:22:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:00.369 03:22:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:00.369 03:22:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 192936' 00:09:00.369 killing process with pid 192936 00:09:00.369 03:22:37 -- common/autotest_common.sh@955 -- # kill 192936 00:09:00.369 03:22:37 -- common/autotest_common.sh@960 -- # wait 192936 00:09:00.629 03:22:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:00.629 03:22:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:00.629 03:22:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:00.629 03:22:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.629 03:22:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:00.629 03:22:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.629 03:22:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.629 03:22:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.620 03:22:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:02.620 00:09:02.620 real 0m16.551s 00:09:02.620 user 0m51.402s 00:09:02.620 sys 0m3.738s 00:09:02.620 03:22:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:02.620 03:22:40 -- common/autotest_common.sh@10 -- # set +x 00:09:02.621 ************************************ 00:09:02.621 END TEST nvmf_ns_masking 00:09:02.621 
************************************ 00:09:02.621 03:22:40 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:02.621 03:22:40 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:02.621 03:22:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:02.621 03:22:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.621 03:22:40 -- common/autotest_common.sh@10 -- # set +x 00:09:02.887 ************************************ 00:09:02.887 START TEST nvmf_nvme_cli 00:09:02.887 ************************************ 00:09:02.887 03:22:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:02.887 * Looking for test storage... 00:09:02.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.887 03:22:40 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.887 03:22:40 -- nvmf/common.sh@7 -- # uname -s 00:09:02.887 03:22:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.887 03:22:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.887 03:22:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.887 03:22:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.887 03:22:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.887 03:22:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.887 03:22:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.887 03:22:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.887 03:22:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.887 03:22:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.887 03:22:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:02.887 03:22:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:02.887 03:22:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.887 03:22:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.887 03:22:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.887 03:22:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.887 03:22:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.887 03:22:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.887 03:22:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.887 03:22:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.887 03:22:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.887 03:22:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.887 03:22:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.887 03:22:40 -- paths/export.sh@5 -- # export PATH 00:09:02.887 03:22:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.887 03:22:40 -- nvmf/common.sh@47 -- # : 0 00:09:02.887 03:22:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.887 03:22:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.887 03:22:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.887 03:22:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.887 03:22:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.887 03:22:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.887 03:22:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.887 03:22:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.887 03:22:40 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:02.887 03:22:40 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:02.887 03:22:40 -- target/nvme_cli.sh@14 -- # devs=() 00:09:02.887 03:22:40 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:02.887 03:22:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:02.887 03:22:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.887 03:22:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:02.887 03:22:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:02.887 03:22:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:02.887 03:22:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.887 03:22:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.887 03:22:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.887 03:22:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:02.887 03:22:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:02.887 03:22:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:02.887 03:22:40 -- common/autotest_common.sh@10 -- # set +x 00:09:05.421 03:22:42 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:05.421 03:22:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.421 03:22:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.421 03:22:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.421 03:22:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.421 03:22:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.421 03:22:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.421 03:22:42 -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.421 03:22:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.421 03:22:42 -- nvmf/common.sh@296 -- # e810=() 00:09:05.421 03:22:42 -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.421 03:22:42 -- nvmf/common.sh@297 -- # x722=() 00:09:05.421 03:22:42 -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.421 03:22:42 -- nvmf/common.sh@298 -- # mlx=() 00:09:05.421 03:22:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.421 03:22:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.421 03:22:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.421 03:22:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.421 03:22:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.421 03:22:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.421 03:22:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.421 03:22:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.421 03:22:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.421 03:22:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.421 03:22:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.421 03:22:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.421 03:22:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:05.421 03:22:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.421 03:22:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:05.421 03:22:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.421 03:22:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.421 03:22:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.421 03:22:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.421 03:22:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:05.421 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:05.421 03:22:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.421 03:22:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.421 03:22:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.421 03:22:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.421 03:22:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.421 03:22:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.421 03:22:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:05.421 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:05.421 03:22:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.421 03:22:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.421 03:22:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.421 03:22:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.421 03:22:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
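The device scan traced above walks a prebuilt pci_bus_cache array; the loop below is a substitute sketch using lspci directly, not the script's own mechanism. The vendor:device pair 8086:159b is the Intel E810 ID the trace matches, and the sysfs path is the one the trace reads.

  # substitute sketch: map each E810 (8086:159b) function to its netdev via sysfs
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net device under $pci: ${dev##*/}"
      done
  done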
00:09:05.421 03:22:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.421 03:22:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.421 03:22:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.422 03:22:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.422 03:22:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.422 03:22:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:05.422 03:22:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.422 03:22:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:05.422 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:05.422 03:22:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.422 03:22:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.422 03:22:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.422 03:22:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:05.422 03:22:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.422 03:22:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:05.422 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:05.422 03:22:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.422 03:22:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:05.422 03:22:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:05.422 03:22:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:05.422 03:22:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:05.422 03:22:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:05.422 03:22:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.422 03:22:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.422 03:22:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.422 03:22:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.422 03:22:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.422 03:22:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.422 03:22:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.422 03:22:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.422 03:22:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.422 03:22:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:05.422 03:22:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.422 03:22:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.422 03:22:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.422 03:22:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.422 03:22:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.422 03:22:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:05.422 03:22:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.422 03:22:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.422 03:22:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.422 03:22:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:09:05.422 00:09:05.422 --- 10.0.0.2 ping statistics --- 00:09:05.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.422 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:09:05.422 03:22:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:09:05.422 00:09:05.422 --- 10.0.0.1 ping statistics --- 00:09:05.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.422 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:09:05.422 03:22:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.422 03:22:42 -- nvmf/common.sh@411 -- # return 0 00:09:05.422 03:22:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:05.422 03:22:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.422 03:22:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:05.422 03:22:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:05.422 03:22:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.422 03:22:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:05.422 03:22:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:05.422 03:22:42 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:05.422 03:22:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:05.422 03:22:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:05.422 03:22:42 -- common/autotest_common.sh@10 -- # set +x 00:09:05.422 03:22:42 -- nvmf/common.sh@470 -- # nvmfpid=196499 00:09:05.422 03:22:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.422 03:22:42 -- nvmf/common.sh@471 -- # waitforlisten 196499 00:09:05.422 03:22:42 -- common/autotest_common.sh@817 -- # '[' -z 196499 ']' 00:09:05.422 03:22:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.422 03:22:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:05.422 03:22:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.422 03:22:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:05.422 03:22:42 -- common/autotest_common.sh@10 -- # set +x 00:09:05.422 [2024-04-19 03:22:42.553343] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:09:05.422 [2024-04-19 03:22:42.553439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.422 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.422 [2024-04-19 03:22:42.620373] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.422 [2024-04-19 03:22:42.743793] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.422 [2024-04-19 03:22:42.743858] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:05.422 [2024-04-19 03:22:42.743871] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.422 [2024-04-19 03:22:42.743883] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.422 [2024-04-19 03:22:42.743892] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.422 [2024-04-19 03:22:42.743995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.422 [2024-04-19 03:22:42.744067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.422 [2024-04-19 03:22:42.744137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.422 [2024-04-19 03:22:42.744140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.422 03:22:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:05.422 03:22:42 -- common/autotest_common.sh@850 -- # return 0 00:09:05.422 03:22:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:05.422 03:22:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:05.422 03:22:42 -- common/autotest_common.sh@10 -- # set +x 00:09:05.422 03:22:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.422 03:22:42 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:05.422 03:22:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.422 03:22:42 -- common/autotest_common.sh@10 -- # set +x 00:09:05.422 [2024-04-19 03:22:42.903181] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.422 03:22:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.422 03:22:42 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:05.422 03:22:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.422 03:22:42 -- common/autotest_common.sh@10 -- # set +x 00:09:05.422 Malloc0 00:09:05.422 03:22:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.422 03:22:42 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:05.422 03:22:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.422 03:22:42 -- common/autotest_common.sh@10 -- # set +x 00:09:05.422 Malloc1 00:09:05.422 03:22:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.422 03:22:42 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:05.422 03:22:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.422 03:22:42 -- common/autotest_common.sh@10 -- # set +x 00:09:05.422 03:22:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.422 03:22:42 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:05.422 03:22:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.422 03:22:42 -- common/autotest_common.sh@10 -- # set +x 00:09:05.422 03:22:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.422 03:22:42 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.422 03:22:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.422 03:22:42 -- common/autotest_common.sh@10 -- # set +x 00:09:05.680 03:22:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.680 03:22:42 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:09:05.680 03:22:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.680 03:22:42 -- common/autotest_common.sh@10 -- # set +x 00:09:05.680 [2024-04-19 03:22:42.989530] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.680 03:22:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.680 03:22:42 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:05.680 03:22:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.680 03:22:42 -- common/autotest_common.sh@10 -- # set +x 00:09:05.680 03:22:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.680 03:22:43 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:05.680 00:09:05.680 Discovery Log Number of Records 2, Generation counter 2 00:09:05.680 =====Discovery Log Entry 0====== 00:09:05.680 trtype: tcp 00:09:05.680 adrfam: ipv4 00:09:05.680 subtype: current discovery subsystem 00:09:05.680 treq: not required 00:09:05.680 portid: 0 00:09:05.680 trsvcid: 4420 00:09:05.680 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:05.680 traddr: 10.0.0.2 00:09:05.680 eflags: explicit discovery connections, duplicate discovery information 00:09:05.680 sectype: none 00:09:05.680 =====Discovery Log Entry 1====== 00:09:05.680 trtype: tcp 00:09:05.680 adrfam: ipv4 00:09:05.680 subtype: nvme subsystem 00:09:05.680 treq: not required 00:09:05.680 portid: 0 00:09:05.680 trsvcid: 4420 00:09:05.680 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:05.680 traddr: 10.0.0.2 00:09:05.680 eflags: none 00:09:05.680 sectype: none 00:09:05.680 03:22:43 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:05.680 03:22:43 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:05.680 03:22:43 -- nvmf/common.sh@511 -- # local dev _ 00:09:05.680 03:22:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:05.680 03:22:43 -- nvmf/common.sh@510 -- # nvme list 00:09:05.680 03:22:43 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:05.680 03:22:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:05.680 03:22:43 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:05.680 03:22:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:05.680 03:22:43 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:05.680 03:22:43 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:06.245 03:22:43 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:06.245 03:22:43 -- common/autotest_common.sh@1184 -- # local i=0 00:09:06.245 03:22:43 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.245 03:22:43 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:09:06.245 03:22:43 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:09:06.245 03:22:43 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:08.768 03:22:45 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:08.768 03:22:45 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:08.768 03:22:45 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.768 03:22:45 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
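(Condensed, the discover/connect/verify sequence being traced here and just below is three commands; the address, NQN, and serial are the ones in the trace, and standard nvme-cli flag spellings are assumed:)

    # Discover what the target exports at 10.0.0.2:4420
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    # Connect to the test subsystem; its two Malloc namespaces appear as block devices
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # Verify both namespaces arrived, exactly as waitforserial does
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect: 2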
00:09:08.768 03:22:45 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.768 03:22:45 -- common/autotest_common.sh@1194 -- # return 0 00:09:08.768 03:22:45 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:08.768 03:22:45 -- nvmf/common.sh@511 -- # local dev _ 00:09:08.768 03:22:45 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:08.768 03:22:45 -- nvmf/common.sh@510 -- # nvme list 00:09:08.768 03:22:45 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:08.768 03:22:45 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:08.768 03:22:45 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:08.768 03:22:45 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:08.768 03:22:45 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:08.768 03:22:45 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:09:08.768 03:22:45 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:08.768 03:22:45 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:08.768 03:22:45 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:09:08.768 03:22:45 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:08.768 03:22:45 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:08.768 /dev/nvme0n1 ]] 00:09:08.768 03:22:45 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:08.768 03:22:45 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:08.768 03:22:45 -- nvmf/common.sh@511 -- # local dev _ 00:09:08.768 03:22:45 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:08.768 03:22:45 -- nvmf/common.sh@510 -- # nvme list 00:09:08.768 03:22:46 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:08.768 03:22:46 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:08.768 03:22:46 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:08.768 03:22:46 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:08.768 03:22:46 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:08.768 03:22:46 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:09:08.768 03:22:46 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:08.768 03:22:46 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:08.768 03:22:46 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:09:08.768 03:22:46 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:08.768 03:22:46 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:08.768 03:22:46 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.768 03:22:46 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.768 03:22:46 -- common/autotest_common.sh@1205 -- # local i=0 00:09:08.768 03:22:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:08.768 03:22:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.768 03:22:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:08.768 03:22:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.768 03:22:46 -- common/autotest_common.sh@1217 -- # return 0 00:09:08.768 03:22:46 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:08.768 03:22:46 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.768 03:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.768 03:22:46 -- common/autotest_common.sh@10 -- # set +x 00:09:08.768 03:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.768 03:22:46 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:08.768 03:22:46 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:08.768 03:22:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:08.768 03:22:46 -- nvmf/common.sh@117 -- # sync 00:09:08.768 03:22:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:08.768 03:22:46 -- nvmf/common.sh@120 -- # set +e 00:09:08.768 03:22:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:08.768 03:22:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:08.768 rmmod nvme_tcp 00:09:09.027 rmmod nvme_fabrics 00:09:09.027 rmmod nvme_keyring 00:09:09.027 03:22:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:09.027 03:22:46 -- nvmf/common.sh@124 -- # set -e 00:09:09.027 03:22:46 -- nvmf/common.sh@125 -- # return 0 00:09:09.027 03:22:46 -- nvmf/common.sh@478 -- # '[' -n 196499 ']' 00:09:09.027 03:22:46 -- nvmf/common.sh@479 -- # killprocess 196499 00:09:09.027 03:22:46 -- common/autotest_common.sh@936 -- # '[' -z 196499 ']' 00:09:09.027 03:22:46 -- common/autotest_common.sh@940 -- # kill -0 196499 00:09:09.027 03:22:46 -- common/autotest_common.sh@941 -- # uname 00:09:09.027 03:22:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:09.027 03:22:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 196499 00:09:09.027 03:22:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:09.027 03:22:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:09.027 03:22:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 196499' 00:09:09.027 killing process with pid 196499 00:09:09.027 03:22:46 -- common/autotest_common.sh@955 -- # kill 196499 00:09:09.027 03:22:46 -- common/autotest_common.sh@960 -- # wait 196499 00:09:09.285 03:22:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:09.285 03:22:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:09.285 03:22:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:09.285 03:22:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:09.285 03:22:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:09.285 03:22:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.285 03:22:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.285 03:22:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.189 03:22:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:11.189 00:09:11.189 real 0m8.549s 00:09:11.189 user 0m16.090s 00:09:11.189 sys 0m2.262s 00:09:11.189 03:22:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:11.189 03:22:48 -- common/autotest_common.sh@10 -- # set +x 00:09:11.189 ************************************ 00:09:11.189 END TEST nvmf_nvme_cli 00:09:11.189 ************************************ 00:09:11.448 03:22:48 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:11.448 03:22:48 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:11.448 03:22:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:11.448 03:22:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.448 03:22:48 -- common/autotest_common.sh@10 -- # set +x 00:09:11.448 ************************************ 00:09:11.448 START TEST nvmf_vfio_user 00:09:11.448 ************************************ 00:09:11.448 03:22:48 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:11.448 * Looking for test storage... 00:09:11.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.448 03:22:48 -- nvmf/common.sh@7 -- # uname -s 00:09:11.448 03:22:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.448 03:22:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.448 03:22:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.448 03:22:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.448 03:22:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.448 03:22:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.448 03:22:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.448 03:22:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.448 03:22:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.448 03:22:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.448 03:22:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:11.448 03:22:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:11.448 03:22:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.448 03:22:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.448 03:22:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.448 03:22:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.448 03:22:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.448 03:22:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.448 03:22:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.448 03:22:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.448 03:22:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.448 03:22:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.448 03:22:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.448 03:22:48 -- paths/export.sh@5 -- # export PATH 00:09:11.448 03:22:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.448 03:22:48 -- nvmf/common.sh@47 -- # : 0 00:09:11.448 03:22:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:11.448 03:22:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:11.448 03:22:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.448 03:22:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.448 03:22:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.448 03:22:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:11.448 03:22:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:11.448 03:22:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=197427 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 197427' 00:09:11.448 Process pid: 197427 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:11.448 03:22:48 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 197427 00:09:11.448 03:22:48 -- common/autotest_common.sh@817 -- # '[' -z 197427 ']' 00:09:11.448 03:22:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.448 03:22:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:11.448 03:22:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.448 03:22:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:11.448 03:22:48 -- common/autotest_common.sh@10 -- # set +x 00:09:11.448 [2024-04-19 03:22:48.993258] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:09:11.448 [2024-04-19 03:22:48.993335] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.706 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.706 [2024-04-19 03:22:49.059518] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.706 [2024-04-19 03:22:49.177377] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.706 [2024-04-19 03:22:49.177446] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.706 [2024-04-19 03:22:49.177471] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.706 [2024-04-19 03:22:49.177482] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.706 [2024-04-19 03:22:49.177492] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.706 [2024-04-19 03:22:49.177542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.706 [2024-04-19 03:22:49.177616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.706 [2024-04-19 03:22:49.177641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.706 [2024-04-19 03:22:49.177644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.636 03:22:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:12.636 03:22:49 -- common/autotest_common.sh@850 -- # return 0 00:09:12.636 03:22:49 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:13.568 03:22:50 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:13.826 03:22:51 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:13.826 03:22:51 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:13.826 03:22:51 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:13.826 03:22:51 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:13.826 03:22:51 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:14.084 Malloc1 00:09:14.084 03:22:51 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:14.342 03:22:51 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:14.599 03:22:51 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:14.856 03:22:52 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:14.856 03:22:52 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:14.856 03:22:52 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:15.113 Malloc2 00:09:15.113 03:22:52 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:15.371 03:22:52 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:15.628 03:22:52 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:15.887 03:22:53 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:15.887 03:22:53 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:15.887 03:22:53 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:15.887 03:22:53 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:15.887 03:22:53 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:15.887 03:22:53 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:15.887 [2024-04-19 03:22:53.235625] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:09:15.887 [2024-04-19 03:22:53.235668] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197866 ] 00:09:15.887 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.887 [2024-04-19 03:22:53.268771] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:15.887 [2024-04-19 03:22:53.277793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:15.888 [2024-04-19 03:22:53.277823] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3d3b733000 00:09:15.888 [2024-04-19 03:22:53.278770] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:15.888 [2024-04-19 03:22:53.279768] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:15.888 [2024-04-19 03:22:53.280774] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:15.888 [2024-04-19 03:22:53.281782] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:15.888 [2024-04-19 03:22:53.282784] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:15.888 [2024-04-19 03:22:53.283790] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:09:15.888 [2024-04-19 03:22:53.284795] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:15.888 [2024-04-19 03:22:53.285799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:15.888 [2024-04-19 03:22:53.286807] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:15.888 [2024-04-19 03:22:53.286831] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3d3b728000 00:09:15.888 [2024-04-19 03:22:53.287984] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:15.888 [2024-04-19 03:22:53.303269] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:15.888 [2024-04-19 03:22:53.303307] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:15.888 [2024-04-19 03:22:53.307924] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:15.888 [2024-04-19 03:22:53.307976] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:15.888 [2024-04-19 03:22:53.308064] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:15.888 [2024-04-19 03:22:53.308092] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:15.888 [2024-04-19 03:22:53.308108] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:15.888 [2024-04-19 03:22:53.308917] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:15.888 [2024-04-19 03:22:53.308936] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:15.888 [2024-04-19 03:22:53.308948] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:15.888 [2024-04-19 03:22:53.309919] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:15.888 [2024-04-19 03:22:53.309936] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:15.888 [2024-04-19 03:22:53.309949] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:15.888 [2024-04-19 03:22:53.310933] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:15.888 [2024-04-19 03:22:53.310951] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:15.888 [2024-04-19 03:22:53.311926] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:15.888 [2024-04-19 03:22:53.311945] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:15.888 [2024-04-19 03:22:53.311953] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:15.888 [2024-04-19 03:22:53.311964] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:15.888 [2024-04-19 03:22:53.312073] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:15.888 [2024-04-19 03:22:53.312081] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:15.888 [2024-04-19 03:22:53.312089] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:15.888 [2024-04-19 03:22:53.313393] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:15.888 [2024-04-19 03:22:53.313938] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:15.888 [2024-04-19 03:22:53.314944] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:15.888 [2024-04-19 03:22:53.315938] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:15.888 [2024-04-19 03:22:53.316044] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:15.888 [2024-04-19 03:22:53.316955] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:15.888 [2024-04-19 03:22:53.316972] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:15.888 [2024-04-19 03:22:53.316980] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:15.888 [2024-04-19 03:22:53.317009] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:15.888 [2024-04-19 03:22:53.317023] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:15.888 [2024-04-19 03:22:53.317050] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:15.888 [2024-04-19 03:22:53.317060] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:15.888 [2024-04-19 03:22:53.317078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:15.888 [2024-04-19 
03:22:53.317126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:15.888 [2024-04-19 03:22:53.317141] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:15.888 [2024-04-19 03:22:53.317149] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:15.888 [2024-04-19 03:22:53.317156] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:15.888 [2024-04-19 03:22:53.317163] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:15.888 [2024-04-19 03:22:53.317170] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:15.888 [2024-04-19 03:22:53.317178] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:15.888 [2024-04-19 03:22:53.317185] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:15.888 [2024-04-19 03:22:53.317198] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:15.888 [2024-04-19 03:22:53.317211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:15.888 [2024-04-19 03:22:53.317225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:15.888 [2024-04-19 03:22:53.317245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:15.888 [2024-04-19 03:22:53.317258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:15.888 [2024-04-19 03:22:53.317270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:15.888 [2024-04-19 03:22:53.317281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:15.888 [2024-04-19 03:22:53.317289] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:15.888 [2024-04-19 03:22:53.317303] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:15.888 [2024-04-19 03:22:53.317317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:15.888 [2024-04-19 03:22:53.317331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:15.888 [2024-04-19 03:22:53.317340] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:15.888 [2024-04-19 03:22:53.317352] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:15.888 [2024-04-19 03:22:53.317387] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317400] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:15.889 [2024-04-19 03:22:53.317430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:15.889 [2024-04-19 03:22:53.317481] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317496] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317508] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:15.889 [2024-04-19 03:22:53.317516] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:15.889 [2024-04-19 03:22:53.317526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:15.889 [2024-04-19 03:22:53.317544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:15.889 [2024-04-19 03:22:53.317559] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:15.889 [2024-04-19 03:22:53.317578] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317591] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317603] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:15.889 [2024-04-19 03:22:53.317611] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:15.889 [2024-04-19 03:22:53.317620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:15.889 [2024-04-19 03:22:53.317643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:15.889 [2024-04-19 03:22:53.317678] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317692] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317704] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:09:15.889 [2024-04-19 03:22:53.317712] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:15.889 [2024-04-19 03:22:53.317721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:15.889 [2024-04-19 03:22:53.317736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:15.889 [2024-04-19 03:22:53.317750] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317760] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317776] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317786] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317794] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317802] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:15.889 [2024-04-19 03:22:53.317809] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:15.889 [2024-04-19 03:22:53.317817] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:15.889 [2024-04-19 03:22:53.317842] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:15.889 [2024-04-19 03:22:53.317859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:15.889 [2024-04-19 03:22:53.317877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:15.889 [2024-04-19 03:22:53.317889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:15.889 [2024-04-19 03:22:53.317904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:15.889 [2024-04-19 03:22:53.317916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:15.889 [2024-04-19 03:22:53.317931] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:15.889 [2024-04-19 03:22:53.317942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:15.889 [2024-04-19 03:22:53.317959] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:15.889 [2024-04-19 03:22:53.317967] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:15.889 [2024-04-19 03:22:53.317974] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:15.889 [2024-04-19 03:22:53.317979] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:15.889 [2024-04-19 03:22:53.317988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:15.889 [2024-04-19 03:22:53.318000] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:15.889 [2024-04-19 03:22:53.318008] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:15.889 [2024-04-19 03:22:53.318016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:15.889 [2024-04-19 03:22:53.318027] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:15.889 [2024-04-19 03:22:53.318035] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:15.889 [2024-04-19 03:22:53.318043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:15.889 [2024-04-19 03:22:53.318055] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:15.889 [2024-04-19 03:22:53.318063] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:15.889 [2024-04-19 03:22:53.318075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:15.889 [2024-04-19 03:22:53.318087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:15.889 [2024-04-19 03:22:53.318107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:15.889 [2024-04-19 03:22:53.318122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:15.889 [2024-04-19 03:22:53.318133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:15.889 ===================================================== 00:09:15.889 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:15.889 ===================================================== 00:09:15.889 Controller Capabilities/Features 00:09:15.889 ================================ 00:09:15.889 Vendor ID: 4e58 00:09:15.889 Subsystem Vendor ID: 4e58 00:09:15.889 Serial Number: SPDK1 00:09:15.889 Model Number: SPDK bdev Controller 00:09:15.889 Firmware Version: 24.05 00:09:15.889 Recommended Arb Burst: 6 00:09:15.889 IEEE OUI Identifier: 8d 6b 50 00:09:15.889 Multi-path I/O 00:09:15.889 May have multiple subsystem ports: Yes 00:09:15.889 May have multiple controllers: Yes 00:09:15.889 Associated with SR-IOV VF: No 00:09:15.889 Max Data Transfer Size: 131072 00:09:15.889 Max Number of Namespaces: 32 00:09:15.889 Max Number of I/O Queues: 127 00:09:15.889 NVMe 
Specification Version (VS): 1.3 00:09:15.889 NVMe Specification Version (Identify): 1.3 00:09:15.889 Maximum Queue Entries: 256 00:09:15.889 Contiguous Queues Required: Yes 00:09:15.889 Arbitration Mechanisms Supported 00:09:15.889 Weighted Round Robin: Not Supported 00:09:15.889 Vendor Specific: Not Supported 00:09:15.889 Reset Timeout: 15000 ms 00:09:15.889 Doorbell Stride: 4 bytes 00:09:15.889 NVM Subsystem Reset: Not Supported 00:09:15.889 Command Sets Supported 00:09:15.889 NVM Command Set: Supported 00:09:15.889 Boot Partition: Not Supported 00:09:15.889 Memory Page Size Minimum: 4096 bytes 00:09:15.889 Memory Page Size Maximum: 4096 bytes 00:09:15.889 Persistent Memory Region: Not Supported 00:09:15.889 Optional Asynchronous Events Supported 00:09:15.889 Namespace Attribute Notices: Supported 00:09:15.889 Firmware Activation Notices: Not Supported 00:09:15.889 ANA Change Notices: Not Supported 00:09:15.889 PLE Aggregate Log Change Notices: Not Supported 00:09:15.889 LBA Status Info Alert Notices: Not Supported 00:09:15.889 EGE Aggregate Log Change Notices: Not Supported 00:09:15.889 Normal NVM Subsystem Shutdown event: Not Supported 00:09:15.889 Zone Descriptor Change Notices: Not Supported 00:09:15.889 Discovery Log Change Notices: Not Supported 00:09:15.889 Controller Attributes 00:09:15.889 128-bit Host Identifier: Supported 00:09:15.889 Non-Operational Permissive Mode: Not Supported 00:09:15.889 NVM Sets: Not Supported 00:09:15.889 Read Recovery Levels: Not Supported 00:09:15.889 Endurance Groups: Not Supported 00:09:15.889 Predictable Latency Mode: Not Supported 00:09:15.890 Traffic Based Keep ALive: Not Supported 00:09:15.890 Namespace Granularity: Not Supported 00:09:15.890 SQ Associations: Not Supported 00:09:15.890 UUID List: Not Supported 00:09:15.890 Multi-Domain Subsystem: Not Supported 00:09:15.890 Fixed Capacity Management: Not Supported 00:09:15.890 Variable Capacity Management: Not Supported 00:09:15.890 Delete Endurance Group: Not Supported 00:09:15.890 Delete NVM Set: Not Supported 00:09:15.890 Extended LBA Formats Supported: Not Supported 00:09:15.890 Flexible Data Placement Supported: Not Supported 00:09:15.890 00:09:15.890 Controller Memory Buffer Support 00:09:15.890 ================================ 00:09:15.890 Supported: No 00:09:15.890 00:09:15.890 Persistent Memory Region Support 00:09:15.890 ================================ 00:09:15.890 Supported: No 00:09:15.890 00:09:15.890 Admin Command Set Attributes 00:09:15.890 ============================ 00:09:15.890 Security Send/Receive: Not Supported 00:09:15.890 Format NVM: Not Supported 00:09:15.890 Firmware Activate/Download: Not Supported 00:09:15.890 Namespace Management: Not Supported 00:09:15.890 Device Self-Test: Not Supported 00:09:15.890 Directives: Not Supported 00:09:15.890 NVMe-MI: Not Supported 00:09:15.890 Virtualization Management: Not Supported 00:09:15.890 Doorbell Buffer Config: Not Supported 00:09:15.890 Get LBA Status Capability: Not Supported 00:09:15.890 Command & Feature Lockdown Capability: Not Supported 00:09:15.890 Abort Command Limit: 4 00:09:15.890 Async Event Request Limit: 4 00:09:15.890 Number of Firmware Slots: N/A 00:09:15.890 Firmware Slot 1 Read-Only: N/A 00:09:15.890 Firmware Activation Without Reset: N/A 00:09:15.890 Multiple Update Detection Support: N/A 00:09:15.890 Firmware Update Granularity: No Information Provided 00:09:15.890 Per-Namespace SMART Log: No 00:09:15.890 Asymmetric Namespace Access Log Page: Not Supported 00:09:15.890 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:09:15.890 Command Effects Log Page: Supported 00:09:15.890 Get Log Page Extended Data: Supported 00:09:15.890 Telemetry Log Pages: Not Supported 00:09:15.890 Persistent Event Log Pages: Not Supported 00:09:15.890 Supported Log Pages Log Page: May Support 00:09:15.890 Commands Supported & Effects Log Page: Not Supported 00:09:15.890 Feature Identifiers & Effects Log Page:May Support 00:09:15.890 NVMe-MI Commands & Effects Log Page: May Support 00:09:15.890 Data Area 4 for Telemetry Log: Not Supported 00:09:15.890 Error Log Page Entries Supported: 128 00:09:15.890 Keep Alive: Supported 00:09:15.890 Keep Alive Granularity: 10000 ms 00:09:15.890 00:09:15.890 NVM Command Set Attributes 00:09:15.890 ========================== 00:09:15.890 Submission Queue Entry Size 00:09:15.890 Max: 64 00:09:15.890 Min: 64 00:09:15.890 Completion Queue Entry Size 00:09:15.890 Max: 16 00:09:15.890 Min: 16 00:09:15.890 Number of Namespaces: 32 00:09:15.890 Compare Command: Supported 00:09:15.890 Write Uncorrectable Command: Not Supported 00:09:15.890 Dataset Management Command: Supported 00:09:15.890 Write Zeroes Command: Supported 00:09:15.890 Set Features Save Field: Not Supported 00:09:15.890 Reservations: Not Supported 00:09:15.890 Timestamp: Not Supported 00:09:15.890 Copy: Supported 00:09:15.890 Volatile Write Cache: Present 00:09:15.890 Atomic Write Unit (Normal): 1 00:09:15.890 Atomic Write Unit (PFail): 1 00:09:15.890 Atomic Compare & Write Unit: 1 00:09:15.890 Fused Compare & Write: Supported 00:09:15.890 Scatter-Gather List 00:09:15.890 SGL Command Set: Supported (Dword aligned) 00:09:15.890 SGL Keyed: Not Supported 00:09:15.890 SGL Bit Bucket Descriptor: Not Supported 00:09:15.890 SGL Metadata Pointer: Not Supported 00:09:15.890 Oversized SGL: Not Supported 00:09:15.890 SGL Metadata Address: Not Supported 00:09:15.890 SGL Offset: Not Supported 00:09:15.890 Transport SGL Data Block: Not Supported 00:09:15.890 Replay Protected Memory Block: Not Supported 00:09:15.890 00:09:15.890 Firmware Slot Information 00:09:15.890 ========================= 00:09:15.890 Active slot: 1 00:09:15.890 Slot 1 Firmware Revision: 24.05 00:09:15.890 00:09:15.890 00:09:15.890 Commands Supported and Effects 00:09:15.890 ============================== 00:09:15.890 Admin Commands 00:09:15.890 -------------- 00:09:15.890 Get Log Page (02h): Supported 00:09:15.890 Identify (06h): Supported 00:09:15.890 Abort (08h): Supported 00:09:15.890 Set Features (09h): Supported 00:09:15.890 Get Features (0Ah): Supported 00:09:15.890 Asynchronous Event Request (0Ch): Supported 00:09:15.890 Keep Alive (18h): Supported 00:09:15.890 I/O Commands 00:09:15.890 ------------ 00:09:15.890 Flush (00h): Supported LBA-Change 00:09:15.890 Write (01h): Supported LBA-Change 00:09:15.890 Read (02h): Supported 00:09:15.890 Compare (05h): Supported 00:09:15.890 Write Zeroes (08h): Supported LBA-Change 00:09:15.890 Dataset Management (09h): Supported LBA-Change 00:09:15.890 Copy (19h): Supported LBA-Change 00:09:15.890 Unknown (79h): Supported LBA-Change 00:09:15.890 Unknown (7Ah): Supported 00:09:15.890 00:09:15.890 Error Log 00:09:15.890 ========= 00:09:15.890 00:09:15.890 Arbitration 00:09:15.890 =========== 00:09:15.890 Arbitration Burst: 1 00:09:15.890 00:09:15.890 Power Management 00:09:15.890 ================ 00:09:15.890 Number of Power States: 1 00:09:15.890 Current Power State: Power State #0 00:09:15.890 Power State #0: 00:09:15.890 Max Power: 0.00 W 00:09:15.890 Non-Operational State: Operational 00:09:15.890 Entry 
Latency: Not Reported 00:09:15.890 Exit Latency: Not Reported 00:09:15.890 Relative Read Throughput: 0 00:09:15.890 Relative Read Latency: 0 00:09:15.890 Relative Write Throughput: 0 00:09:15.890 Relative Write Latency: 0 00:09:15.890 Idle Power: Not Reported 00:09:15.890 Active Power: Not Reported 00:09:15.890 Non-Operational Permissive Mode: Not Supported 00:09:15.890 00:09:15.890 Health Information 00:09:15.890 ================== 00:09:15.890 Critical Warnings: 00:09:15.890 Available Spare Space: OK 00:09:15.890 Temperature: OK 00:09:15.890 Device Reliability: OK 00:09:15.890 Read Only: No 00:09:15.890 Volatile Memory Backup: OK 00:09:15.890
[2024-04-19 03:22:53.318260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:15.890 [2024-04-19 03:22:53.318276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:15.890 [2024-04-19 03:22:53.318313] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:15.890 [2024-04-19 03:22:53.318329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.890 [2024-04-19 03:22:53.318340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.890 [2024-04-19 03:22:53.318349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.890 [2024-04-19 03:22:53.318358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.890 [2024-04-19 03:22:53.322393] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:15.890 [2024-04-19 03:22:53.322415] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:15.890 [2024-04-19 03:22:53.322983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:15.890 [2024-04-19 03:22:53.323066] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:15.890 [2024-04-19 03:22:53.323080] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:15.890 [2024-04-19 03:22:53.323992] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:15.890 [2024-04-19 03:22:53.324014] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:15.890 [2024-04-19 03:22:53.324068] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:15.890 [2024-04-19 03:22:53.326034] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:15.890
Current Temperature: 0 Kelvin (-273 Celsius) 00:09:15.890 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:15.890 Available Spare: 0% 00:09:15.890 Available Spare Threshold: 0% 00:09:15.890 Life Percentage Used: 0%
00:09:15.890 Data Units Read: 0 00:09:15.890 Data Units Written: 0 00:09:15.891 Host Read Commands: 0 00:09:15.891 Host Write Commands: 0 00:09:15.891 Controller Busy Time: 0 minutes 00:09:15.891 Power Cycles: 0 00:09:15.891 Power On Hours: 0 hours 00:09:15.891 Unsafe Shutdowns: 0 00:09:15.891 Unrecoverable Media Errors: 0 00:09:15.891 Lifetime Error Log Entries: 0 00:09:15.891 Warning Temperature Time: 0 minutes 00:09:15.891 Critical Temperature Time: 0 minutes 00:09:15.891 00:09:15.891 Number of Queues 00:09:15.891 ================ 00:09:15.891 Number of I/O Submission Queues: 127 00:09:15.891 Number of I/O Completion Queues: 127 00:09:15.891 00:09:15.891 Active Namespaces 00:09:15.891 ================= 00:09:15.891 Namespace ID:1 00:09:15.891 Error Recovery Timeout: Unlimited 00:09:15.891 Command Set Identifier: NVM (00h) 00:09:15.891 Deallocate: Supported 00:09:15.891 Deallocated/Unwritten Error: Not Supported 00:09:15.891 Deallocated Read Value: Unknown 00:09:15.891 Deallocate in Write Zeroes: Not Supported 00:09:15.891 Deallocated Guard Field: 0xFFFF 00:09:15.891 Flush: Supported 00:09:15.891 Reservation: Supported 00:09:15.891 Namespace Sharing Capabilities: Multiple Controllers 00:09:15.891 Size (in LBAs): 131072 (0GiB) 00:09:15.891 Capacity (in LBAs): 131072 (0GiB) 00:09:15.891 Utilization (in LBAs): 131072 (0GiB) 00:09:15.891 NGUID: F77243D02190498396ADAD75931B4A24 00:09:15.891 UUID: f77243d0-2190-4983-96ad-ad75931b4a24 00:09:15.891 Thin Provisioning: Not Supported 00:09:15.891 Per-NS Atomic Units: Yes 00:09:15.891 Atomic Boundary Size (Normal): 0 00:09:15.891 Atomic Boundary Size (PFail): 0 00:09:15.891 Atomic Boundary Offset: 0 00:09:15.891 Maximum Single Source Range Length: 65535 00:09:15.891 Maximum Copy Length: 65535 00:09:15.891 Maximum Source Range Count: 1 00:09:15.891 NGUID/EUI64 Never Reused: No 00:09:15.891 Namespace Write Protected: No 00:09:15.891 Number of LBA Formats: 1 00:09:15.891 Current LBA Format: LBA Format #00 00:09:15.891 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:15.891 00:09:15.891 03:22:53 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:15.891 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.148 [2024-04-19 03:22:53.556234] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:21.407 [2024-04-19 03:22:58.579903] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:21.407 Initializing NVMe Controllers 00:09:21.407 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:21.408 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:21.408 Initialization complete. Launching workers. 
00:09:21.408 ======================================================== 00:09:21.408 Latency(us) 00:09:21.408 Device Information : IOPS MiB/s Average min max 00:09:21.408 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33782.95 131.96 3788.32 1206.73 7767.05 00:09:21.408 ======================================================== 00:09:21.408 Total : 33782.95 131.96 3788.32 1206.73 7767.05 00:09:21.408 00:09:21.408 03:22:58 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:21.408 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.408 [2024-04-19 03:22:58.821993] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:26.741 [2024-04-19 03:23:03.858051] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:26.741 Initializing NVMe Controllers 00:09:26.741 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:26.742 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:26.742 Initialization complete. Launching workers. 00:09:26.742 ======================================================== 00:09:26.742 Latency(us) 00:09:26.742 Device Information : IOPS MiB/s Average min max 00:09:26.742 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7982.86 6978.70 11015.07 00:09:26.742 ======================================================== 00:09:26.742 Total : 16051.20 62.70 7982.86 6978.70 11015.07 00:09:26.742 00:09:26.742 03:23:03 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:26.742 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.742 [2024-04-19 03:23:04.071031] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:32.010 [2024-04-19 03:23:09.138724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:32.010 Initializing NVMe Controllers 00:09:32.010 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:32.010 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:32.010 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:32.010 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:32.010 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:32.010 Initialization complete. Launching workers. 
00:09:32.010 Starting thread on core 2 00:09:32.010 Starting thread on core 3 00:09:32.010 Starting thread on core 1 00:09:32.010 03:23:09 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:32.011 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.011 [2024-04-19 03:23:09.444002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:35.321 [2024-04-19 03:23:12.514432] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:35.321 Initializing NVMe Controllers 00:09:35.321 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:35.321 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:35.322 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:35.322 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:35.322 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:35.322 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:35.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:35.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:35.322 Initialization complete. Launching workers. 00:09:35.322 Starting thread on core 1 with urgent priority queue 00:09:35.322 Starting thread on core 2 with urgent priority queue 00:09:35.322 Starting thread on core 3 with urgent priority queue 00:09:35.322 Starting thread on core 0 with urgent priority queue 00:09:35.322 SPDK bdev Controller (SPDK1 ) core 0: 5357.67 IO/s 18.66 secs/100000 ios 00:09:35.322 SPDK bdev Controller (SPDK1 ) core 1: 5064.00 IO/s 19.75 secs/100000 ios 00:09:35.322 SPDK bdev Controller (SPDK1 ) core 2: 5525.00 IO/s 18.10 secs/100000 ios 00:09:35.322 SPDK bdev Controller (SPDK1 ) core 3: 5221.67 IO/s 19.15 secs/100000 ios 00:09:35.322 ======================================================== 00:09:35.322 00:09:35.322 03:23:12 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:35.322 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.322 [2024-04-19 03:23:12.814911] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:35.322 [2024-04-19 03:23:12.846375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:35.589 Initializing NVMe Controllers 00:09:35.589 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:35.589 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:35.589 Namespace ID: 1 size: 0GB 00:09:35.589 Initialization complete. 00:09:35.589 INFO: using host memory buffer for IO 00:09:35.589 Hello world! 
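A minimal sketch of the invocation pattern shared by the spdk_nvme_perf, reconnect, arbitration, and hello_world runs above; the -r transport string and the flags are copied from the log, while SPDK_DIR simply names the build tree this job already uses:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # build tree used by this job
TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

# 4096-byte sequential reads, queue depth 128, 5 seconds, worker pinned to core 1 (-c 0x2):
"$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

# 50/50 random read/write through the reconnect example on cores 1-3 (-c 0xE):
"$SPDK_DIR"/build/examples/reconnect -r "$TR" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE

Each example reaches the controller solely through that -r string, which is why every test in this block can retarget the same binaries by editing trtype/traddr/subnqn.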
00:09:35.589 03:23:12 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:35.589 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.589 [2024-04-19 03:23:13.138829] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:36.960 Initializing NVMe Controllers 00:09:36.960 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:36.960 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:36.960 Initialization complete. Launching workers. 00:09:36.960 submit (in ns) avg, min, max = 7845.1, 3490.0, 4018043.3 00:09:36.960 complete (in ns) avg, min, max = 27874.7, 2047.8, 5993904.4 00:09:36.960 00:09:36.960 Submit histogram 00:09:36.960 ================ 00:09:36.960 Range in us Cumulative Count 00:09:36.960 3.484 - 3.508: 0.2164% ( 29) 00:09:36.960 3.508 - 3.532: 0.9625% ( 100) 00:09:36.960 3.532 - 3.556: 3.0665% ( 282) 00:09:36.960 3.556 - 3.579: 7.1103% ( 542) 00:09:36.960 3.579 - 3.603: 15.1533% ( 1078) 00:09:36.960 3.603 - 3.627: 22.9575% ( 1046) 00:09:36.960 3.627 - 3.650: 32.3286% ( 1256) 00:09:36.960 3.650 - 3.674: 39.9463% ( 1021) 00:09:36.960 3.674 - 3.698: 47.9818% ( 1077) 00:09:36.960 3.698 - 3.721: 54.4132% ( 862) 00:09:36.960 3.721 - 3.745: 59.0689% ( 624) 00:09:36.960 3.745 - 3.769: 62.6054% ( 474) 00:09:36.960 3.769 - 3.793: 65.7539% ( 422) 00:09:36.960 3.793 - 3.816: 69.0144% ( 437) 00:09:36.960 3.816 - 3.840: 72.7300% ( 498) 00:09:36.960 3.840 - 3.864: 76.9828% ( 570) 00:09:36.960 3.864 - 3.887: 80.4223% ( 461) 00:09:36.960 3.887 - 3.911: 83.5485% ( 419) 00:09:36.960 3.911 - 3.935: 86.2270% ( 359) 00:09:36.960 3.935 - 3.959: 88.2340% ( 269) 00:09:36.960 3.959 - 3.982: 89.7859% ( 208) 00:09:36.960 3.982 - 4.006: 91.0244% ( 166) 00:09:36.960 4.006 - 4.030: 92.0839% ( 142) 00:09:36.960 4.030 - 4.053: 92.8598% ( 104) 00:09:36.960 4.053 - 4.077: 93.6656% ( 108) 00:09:36.960 4.077 - 4.101: 94.4938% ( 111) 00:09:36.960 4.101 - 4.124: 95.0533% ( 75) 00:09:36.960 4.124 - 4.148: 95.5159% ( 62) 00:09:36.960 4.148 - 4.172: 95.8442% ( 44) 00:09:36.960 4.172 - 4.196: 96.0755% ( 31) 00:09:36.960 4.196 - 4.219: 96.2770% ( 27) 00:09:36.960 4.219 - 4.243: 96.4262% ( 20) 00:09:36.960 4.243 - 4.267: 96.5754% ( 20) 00:09:36.960 4.267 - 4.290: 96.6649% ( 12) 00:09:36.960 4.290 - 4.314: 96.7843% ( 16) 00:09:36.960 4.314 - 4.338: 96.8813% ( 13) 00:09:36.960 4.338 - 4.361: 96.9783% ( 13) 00:09:36.960 4.361 - 4.385: 97.0604% ( 11) 00:09:36.960 4.385 - 4.409: 97.1200% ( 8) 00:09:36.960 4.409 - 4.433: 97.1499% ( 4) 00:09:36.960 4.433 - 4.456: 97.1797% ( 4) 00:09:36.960 4.456 - 4.480: 97.1947% ( 2) 00:09:36.960 4.480 - 4.504: 97.2021% ( 1) 00:09:36.960 4.527 - 4.551: 97.2394% ( 5) 00:09:36.960 4.551 - 4.575: 97.2767% ( 5) 00:09:36.960 4.575 - 4.599: 97.2991% ( 3) 00:09:36.960 4.599 - 4.622: 97.3140% ( 2) 00:09:36.960 4.622 - 4.646: 97.3588% ( 6) 00:09:36.960 4.646 - 4.670: 97.3812% ( 3) 00:09:36.960 4.670 - 4.693: 97.4185% ( 5) 00:09:36.960 4.693 - 4.717: 97.4707% ( 7) 00:09:36.960 4.717 - 4.741: 97.5229% ( 7) 00:09:36.960 4.741 - 4.764: 97.5453% ( 3) 00:09:36.960 4.764 - 4.788: 97.5752% ( 4) 00:09:36.960 4.788 - 4.812: 97.6050% ( 4) 00:09:36.960 4.812 - 4.836: 97.6349% ( 4) 00:09:36.960 4.836 - 4.859: 97.6722% ( 5) 00:09:36.960 4.859 - 4.883: 97.7020% ( 4) 00:09:36.960 4.883 - 4.907: 97.7244% ( 3) 00:09:36.960 4.907 - 4.930: 97.7468% ( 3) 00:09:36.960 
4.930 - 4.954: 97.7542% ( 1) 00:09:36.960 4.954 - 4.978: 97.7617% ( 1) 00:09:36.960 4.978 - 5.001: 97.7915% ( 4) 00:09:36.960 5.001 - 5.025: 97.8139% ( 3) 00:09:36.960 5.025 - 5.049: 97.8214% ( 1) 00:09:36.960 5.049 - 5.073: 97.8363% ( 2) 00:09:36.960 5.073 - 5.096: 97.8587% ( 3) 00:09:36.960 5.096 - 5.120: 97.8661% ( 1) 00:09:36.960 5.120 - 5.144: 97.8736% ( 1) 00:09:36.960 5.144 - 5.167: 97.8811% ( 1) 00:09:36.960 5.167 - 5.191: 97.8885% ( 1) 00:09:36.960 5.215 - 5.239: 97.9035% ( 2) 00:09:36.960 5.239 - 5.262: 97.9109% ( 1) 00:09:36.960 5.333 - 5.357: 97.9184% ( 1) 00:09:36.960 5.357 - 5.381: 97.9258% ( 1) 00:09:36.960 5.381 - 5.404: 97.9333% ( 1) 00:09:36.960 5.404 - 5.428: 97.9408% ( 1) 00:09:36.960 5.428 - 5.452: 97.9482% ( 1) 00:09:36.960 5.476 - 5.499: 97.9631% ( 2) 00:09:36.960 5.831 - 5.855: 97.9706% ( 1) 00:09:36.960 5.902 - 5.926: 97.9781% ( 1) 00:09:36.960 5.926 - 5.950: 97.9855% ( 1) 00:09:36.960 6.044 - 6.068: 97.9930% ( 1) 00:09:36.960 6.258 - 6.305: 98.0004% ( 1) 00:09:36.960 6.447 - 6.495: 98.0228% ( 3) 00:09:36.960 6.495 - 6.542: 98.0303% ( 1) 00:09:36.960 6.542 - 6.590: 98.0378% ( 1) 00:09:36.960 6.590 - 6.637: 98.0452% ( 1) 00:09:36.960 6.827 - 6.874: 98.0601% ( 2) 00:09:36.960 6.921 - 6.969: 98.0676% ( 1) 00:09:36.960 6.969 - 7.016: 98.0900% ( 3) 00:09:36.960 7.016 - 7.064: 98.0974% ( 1) 00:09:36.960 7.111 - 7.159: 98.1198% ( 3) 00:09:36.960 7.206 - 7.253: 98.1347% ( 2) 00:09:36.960 7.253 - 7.301: 98.1422% ( 1) 00:09:36.960 7.301 - 7.348: 98.1571% ( 2) 00:09:36.960 7.348 - 7.396: 98.1721% ( 2) 00:09:36.960 7.396 - 7.443: 98.1944% ( 3) 00:09:36.960 7.443 - 7.490: 98.2094% ( 2) 00:09:36.960 7.538 - 7.585: 98.2243% ( 2) 00:09:36.960 7.585 - 7.633: 98.2392% ( 2) 00:09:36.960 7.633 - 7.680: 98.2541% ( 2) 00:09:36.960 7.680 - 7.727: 98.2690% ( 2) 00:09:36.960 7.822 - 7.870: 98.2765% ( 1) 00:09:36.960 7.917 - 7.964: 98.3063% ( 4) 00:09:36.960 7.964 - 8.012: 98.3138% ( 1) 00:09:36.960 8.012 - 8.059: 98.3287% ( 2) 00:09:36.960 8.107 - 8.154: 98.3511% ( 3) 00:09:36.960 8.154 - 8.201: 98.3735% ( 3) 00:09:36.960 8.344 - 8.391: 98.3884% ( 2) 00:09:36.960 8.391 - 8.439: 98.3959% ( 1) 00:09:36.960 8.533 - 8.581: 98.4108% ( 2) 00:09:36.960 8.723 - 8.770: 98.4183% ( 1) 00:09:36.961 8.770 - 8.818: 98.4257% ( 1) 00:09:36.961 8.865 - 8.913: 98.4332% ( 1) 00:09:36.961 9.007 - 9.055: 98.4406% ( 1) 00:09:36.961 9.102 - 9.150: 98.4481% ( 1) 00:09:36.961 9.339 - 9.387: 98.4556% ( 1) 00:09:36.961 9.387 - 9.434: 98.4630% ( 1) 00:09:36.961 9.481 - 9.529: 98.4705% ( 1) 00:09:36.961 9.576 - 9.624: 98.4780% ( 1) 00:09:36.961 9.719 - 9.766: 98.4854% ( 1) 00:09:36.961 10.050 - 10.098: 98.5078% ( 3) 00:09:36.961 10.193 - 10.240: 98.5302% ( 3) 00:09:36.961 10.240 - 10.287: 98.5376% ( 1) 00:09:36.961 10.430 - 10.477: 98.5451% ( 1) 00:09:36.961 10.477 - 10.524: 98.5526% ( 1) 00:09:36.961 10.809 - 10.856: 98.5600% ( 1) 00:09:36.961 10.904 - 10.951: 98.5749% ( 2) 00:09:36.961 10.951 - 10.999: 98.5824% ( 1) 00:09:36.961 11.283 - 11.330: 98.5899% ( 1) 00:09:36.961 11.378 - 11.425: 98.5973% ( 1) 00:09:36.961 11.520 - 11.567: 98.6048% ( 1) 00:09:36.961 11.710 - 11.757: 98.6123% ( 1) 00:09:36.961 12.136 - 12.231: 98.6272% ( 2) 00:09:36.961 12.231 - 12.326: 98.6421% ( 2) 00:09:36.961 12.326 - 12.421: 98.6496% ( 1) 00:09:36.961 12.516 - 12.610: 98.6645% ( 2) 00:09:36.961 12.705 - 12.800: 98.6719% ( 1) 00:09:36.961 13.084 - 13.179: 98.6794% ( 1) 00:09:36.961 13.179 - 13.274: 98.6869% ( 1) 00:09:36.961 13.274 - 13.369: 98.6943% ( 1) 00:09:36.961 13.369 - 13.464: 98.7092% ( 2) 00:09:36.961 13.464 - 13.559: 98.7167% 
( 1) 00:09:36.961 13.653 - 13.748: 98.7242% ( 1) 00:09:36.961 13.748 - 13.843: 98.7316% ( 1) 00:09:36.961 13.843 - 13.938: 98.7391% ( 1) 00:09:36.961 13.938 - 14.033: 98.7465% ( 1) 00:09:36.961 14.033 - 14.127: 98.7540% ( 1) 00:09:36.961 14.127 - 14.222: 98.7615% ( 1) 00:09:36.961 14.222 - 14.317: 98.7689% ( 1) 00:09:36.961 14.412 - 14.507: 98.7764% ( 1) 00:09:36.961 14.696 - 14.791: 98.7839% ( 1) 00:09:36.961 14.886 - 14.981: 98.7913% ( 1) 00:09:36.961 15.076 - 15.170: 98.8062% ( 2) 00:09:36.961 16.877 - 16.972: 98.8137% ( 1) 00:09:36.961 17.067 - 17.161: 98.8212% ( 1) 00:09:36.961 17.161 - 17.256: 98.8286% ( 1) 00:09:36.961 17.256 - 17.351: 98.8361% ( 1) 00:09:36.961 17.351 - 17.446: 98.8435% ( 1) 00:09:36.961 17.446 - 17.541: 98.8808% ( 5) 00:09:36.961 17.541 - 17.636: 98.9107% ( 4) 00:09:36.961 17.636 - 17.730: 98.9182% ( 1) 00:09:36.961 17.730 - 17.825: 98.9405% ( 3) 00:09:36.961 17.825 - 17.920: 99.0002% ( 8) 00:09:36.961 17.920 - 18.015: 99.0898% ( 12) 00:09:36.961 18.015 - 18.110: 99.1569% ( 9) 00:09:36.961 18.110 - 18.204: 99.2241% ( 9) 00:09:36.961 18.204 - 18.299: 99.2688% ( 6) 00:09:36.961 18.299 - 18.394: 99.3360% ( 9) 00:09:36.961 18.394 - 18.489: 99.4180% ( 11) 00:09:36.961 18.489 - 18.584: 99.4777% ( 8) 00:09:36.961 18.584 - 18.679: 99.5150% ( 5) 00:09:36.961 18.679 - 18.773: 99.5673% ( 7) 00:09:36.961 18.773 - 18.868: 99.6195% ( 7) 00:09:36.961 18.868 - 18.963: 99.6941% ( 10) 00:09:36.961 18.963 - 19.058: 99.7314% ( 5) 00:09:36.961 19.058 - 19.153: 99.7463% ( 2) 00:09:36.961 19.153 - 19.247: 99.7538% ( 1) 00:09:36.961 19.247 - 19.342: 99.7687% ( 2) 00:09:36.961 19.342 - 19.437: 99.7911% ( 3) 00:09:36.961 19.437 - 19.532: 99.8060% ( 2) 00:09:36.961 19.532 - 19.627: 99.8135% ( 1) 00:09:36.961 19.627 - 19.721: 99.8209% ( 1) 00:09:36.961 19.721 - 19.816: 99.8284% ( 1) 00:09:36.961 19.911 - 20.006: 99.8359% ( 1) 00:09:36.961 20.101 - 20.196: 99.8433% ( 1) 00:09:36.961 21.713 - 21.807: 99.8508% ( 1) 00:09:36.961 21.807 - 21.902: 99.8582% ( 1) 00:09:36.961 22.187 - 22.281: 99.8657% ( 1) 00:09:36.961 24.273 - 24.462: 99.8732% ( 1) 00:09:36.961 25.410 - 25.600: 99.8806% ( 1) 00:09:36.961 27.117 - 27.307: 99.8881% ( 1) 00:09:36.961 27.496 - 27.686: 99.8955% ( 1) 00:09:36.961 27.876 - 28.065: 99.9030% ( 1) 00:09:36.961 3980.705 - 4004.978: 99.9776% ( 10) 00:09:36.961 4004.978 - 4029.250: 100.0000% ( 3) 00:09:36.961 00:09:36.961 Complete histogram 00:09:36.961 ================== 00:09:36.961 Range in us Cumulative Count 00:09:36.961 2.039 - 2.050: 0.0821% ( 11) 00:09:36.961 2.050 - 2.062: 7.4088% ( 982) 00:09:36.961 2.062 - 2.074: 13.4970% ( 816) 00:09:36.961 2.074 - 2.086: 18.8241% ( 714) 00:09:36.961 2.086 - 2.098: 49.5859% ( 4123) 00:09:36.961 2.098 - 2.110: 59.4345% ( 1320) 00:09:36.961 2.110 - 2.121: 62.7173% ( 440) 00:09:36.961 2.121 - 2.133: 66.7388% ( 539) 00:09:36.961 2.133 - 2.145: 67.7460% ( 135) 00:09:36.961 2.145 - 2.157: 70.9692% ( 432) 00:09:36.961 2.157 - 2.169: 80.0418% ( 1216) 00:09:36.961 2.169 - 2.181: 82.0264% ( 266) 00:09:36.961 2.181 - 2.193: 83.1381% ( 149) 00:09:36.961 2.193 - 2.204: 84.9437% ( 242) 00:09:36.961 2.204 - 2.216: 85.6898% ( 100) 00:09:36.961 2.216 - 2.228: 87.6595% ( 264) 00:09:36.961 2.228 - 2.240: 92.1510% ( 602) 00:09:36.961 2.240 - 2.252: 93.5462% ( 187) 00:09:36.961 2.252 - 2.264: 94.0909% ( 73) 00:09:36.961 2.264 - 2.276: 94.6057% ( 69) 00:09:36.961 2.276 - 2.287: 94.8668% ( 35) 00:09:36.961 2.287 - 2.299: 95.0533% ( 25) 00:09:36.961 2.299 - 2.311: 95.3891% ( 45) 00:09:36.961 2.311 - 2.323: 95.6129% ( 30) 00:09:36.961 2.323 - 2.335: 
95.7994% ( 25) 00:09:36.961 2.335 - 2.347: 96.0307% ( 31) 00:09:36.961 2.347 - 2.359: 96.2620% ( 31) 00:09:36.961 2.359 - 2.370: 96.6948% ( 58) 00:09:36.961 2.370 - 2.382: 97.0977% ( 54) 00:09:36.961 2.382 - 2.394: 97.4856% ( 52) 00:09:36.961 2.394 - 2.406: 97.8214% ( 45) 00:09:36.961 2.406 - 2.418: 97.9930% ( 23) 00:09:36.961 2.418 - 2.430: 98.1273% ( 18) 00:09:36.961 2.430 - 2.441: 98.1571% ( 4) 00:09:36.961 2.441 - 2.453: 98.2392% ( 11) 00:09:36.961 2.453 - 2.465: 98.3063% ( 9) 00:09:36.961 2.465 - 2.477: 98.3511% ( 6) 00:09:36.961 2.477 - 2.489: 98.4183% ( 9) 00:09:36.961 2.489 - 2.501: 98.4406% ( 3) 00:09:36.961 2.501 - 2.513: 98.4556% ( 2) 00:09:36.961 2.513 - 2.524: 98.4630% ( 1) 00:09:36.961 2.524 - 2.536: 98.4780% ( 2) 00:09:36.961 2.536 - 2.548: 98.4854% ( 1) 00:09:36.961 2.596 - 2.607: 98.4929% ( 1) 00:09:36.961 2.643 - 2.655: 98.5003% ( 1) 00:09:36.961 2.679 - 2.690: 98.5078% ( 1) 00:09:36.961 2.702 - 2.714: 98.5153% ( 1) 00:09:36.961 2.714 - 2.726: 98.5227% ( 1) 00:09:36.961 3.058 - 3.081: 98.5302% ( 1) 00:09:36.961 3.081 - 3.105: 98.5376% ( 1) 00:09:36.961 3.105 - 3.129: 98.5451% ( 1) 00:09:36.961 3.129 - 3.153: 98.5526% ( 1) 00:09:36.961 3.224 - 3.247: 98.5600% ( 1) 00:09:36.961 3.271 - 3.295: 98.5675% ( 1) 00:09:36.962
[2024-04-19 03:23:14.161014] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:36.962
3.295 - 3.319: 98.5749% ( 1) 00:09:36.962 3.342 - 3.366: 98.5824% ( 1) 00:09:36.962 3.437 - 3.461: 98.5973% ( 2) 00:09:36.962 3.461 - 3.484: 98.6048% ( 1) 00:09:36.962 3.484 - 3.508: 98.6123% ( 1) 00:09:36.962 3.556 - 3.579: 98.6197% ( 1) 00:09:36.962 3.674 - 3.698: 98.6272% ( 1) 00:09:36.962 3.698 - 3.721: 98.6346% ( 1) 00:09:36.962 3.793 - 3.816: 98.6421% ( 1) 00:09:36.962 4.978 - 5.001: 98.6496% ( 1) 00:09:36.962 5.025 - 5.049: 98.6570% ( 1) 00:09:36.962 5.310 - 5.333: 98.6645% ( 1) 00:09:36.962 5.523 - 5.547: 98.6719% ( 1) 00:09:36.962 5.594 - 5.618: 98.6794% ( 1) 00:09:36.962 5.618 - 5.641: 98.6869% ( 1) 00:09:36.962 5.713 - 5.736: 98.6943% ( 1) 00:09:36.962 5.760 - 5.784: 98.7018% ( 1) 00:09:36.962 5.807 - 5.831: 98.7092% ( 1) 00:09:36.962 5.855 - 5.879: 98.7167% ( 1) 00:09:36.962 5.902 - 5.926: 98.7242% ( 1) 00:09:36.962 6.305 - 6.353: 98.7316% ( 1) 00:09:36.962 6.921 - 6.969: 98.7391% ( 1) 00:09:36.962 7.016 - 7.064: 98.7465% ( 1) 00:09:36.962 7.206 - 7.253: 98.7540% ( 1) 00:09:36.962 7.253 - 7.301: 98.7615% ( 1) 00:09:36.962 7.538 - 7.585: 98.7689% ( 1) 00:09:36.962 7.775 - 7.822: 98.7764% ( 1) 00:09:36.962 10.904 - 10.951: 98.7839% ( 1) 00:09:36.962 15.550 - 15.644: 98.7988% ( 2) 00:09:36.962 15.739 - 15.834: 98.8212% ( 3) 00:09:36.962 15.834 - 15.929: 98.8286% ( 1) 00:09:36.962 15.929 - 16.024: 98.8510% ( 3) 00:09:36.962 16.024 - 16.119: 98.8883% ( 5) 00:09:36.962 16.119 - 16.213: 98.9331% ( 6) 00:09:36.962 16.213 - 16.308: 98.9405% ( 1) 00:09:36.962 16.308 - 16.403: 98.9629% ( 3) 00:09:36.962 16.403 - 16.498: 99.0151% ( 7) 00:09:36.962 16.498 - 16.593: 99.1121% ( 13) 00:09:36.962 16.593 - 16.687: 99.1569% ( 6) 00:09:36.962 16.687 - 16.782: 99.1718% ( 2) 00:09:36.962 16.782 - 16.877: 99.1942% ( 3) 00:09:36.962 16.877 - 16.972: 99.2166% ( 3) 00:09:36.962 16.972 - 17.067: 99.2390% ( 3) 00:09:36.962 17.067 - 17.161: 99.2539% ( 2) 00:09:36.962 17.161 - 17.256: 99.2688% ( 2) 00:09:36.962 17.256 - 17.351: 99.2837% ( 2) 00:09:36.962 17.446 - 17.541: 99.2987% ( 2) 00:09:36.962 17.541 - 17.636: 99.3061% ( 1) 00:09:36.962 17.730 - 17.825: 99.3285% ( 3) 00:09:36.962 17.825 - 17.920: 99.3360% ( 1) 00:09:36.962 20.954
- 21.049: 99.3434% ( 1) 00:09:36.962 26.359 - 26.548: 99.3509% ( 1) 00:09:36.962 1820.444 - 1832.581: 99.3584% ( 1) 00:09:36.962 2014.625 - 2026.761: 99.3658% ( 1) 00:09:36.962 2026.761 - 2038.898: 99.3807% ( 2) 00:09:36.962 2038.898 - 2051.034: 99.3882% ( 1) 00:09:36.962 3980.705 - 4004.978: 99.8657% ( 64) 00:09:36.962 4004.978 - 4029.250: 99.9776% ( 15) 00:09:36.962 5971.058 - 5995.330: 100.0000% ( 3) 00:09:36.962 00:09:36.962 03:23:14 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:09:36.962 03:23:14 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:36.962 03:23:14 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:09:36.962 03:23:14 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:09:36.962 03:23:14 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:36.962 [2024-04-19 03:23:14.420078] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:09:36.962 [ 00:09:36.962 { 00:09:36.962 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:36.962 "subtype": "Discovery", 00:09:36.962 "listen_addresses": [], 00:09:36.962 "allow_any_host": true, 00:09:36.962 "hosts": [] 00:09:36.962 }, 00:09:36.962 { 00:09:36.962 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:36.962 "subtype": "NVMe", 00:09:36.962 "listen_addresses": [ 00:09:36.962 { 00:09:36.962 "transport": "VFIOUSER", 00:09:36.962 "trtype": "VFIOUSER", 00:09:36.962 "adrfam": "IPv4", 00:09:36.962 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:36.962 "trsvcid": "0" 00:09:36.962 } 00:09:36.962 ], 00:09:36.962 "allow_any_host": true, 00:09:36.962 "hosts": [], 00:09:36.962 "serial_number": "SPDK1", 00:09:36.962 "model_number": "SPDK bdev Controller", 00:09:36.962 "max_namespaces": 32, 00:09:36.962 "min_cntlid": 1, 00:09:36.962 "max_cntlid": 65519, 00:09:36.962 "namespaces": [ 00:09:36.962 { 00:09:36.962 "nsid": 1, 00:09:36.962 "bdev_name": "Malloc1", 00:09:36.962 "name": "Malloc1", 00:09:36.962 "nguid": "F77243D02190498396ADAD75931B4A24", 00:09:36.962 "uuid": "f77243d0-2190-4983-96ad-ad75931b4a24" 00:09:36.962 } 00:09:36.962 ] 00:09:36.962 }, 00:09:36.962 { 00:09:36.962 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:36.962 "subtype": "NVMe", 00:09:36.962 "listen_addresses": [ 00:09:36.962 { 00:09:36.962 "transport": "VFIOUSER", 00:09:36.962 "trtype": "VFIOUSER", 00:09:36.962 "adrfam": "IPv4", 00:09:36.962 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:36.962 "trsvcid": "0" 00:09:36.962 } 00:09:36.962 ], 00:09:36.962 "allow_any_host": true, 00:09:36.962 "hosts": [], 00:09:36.962 "serial_number": "SPDK2", 00:09:36.962 "model_number": "SPDK bdev Controller", 00:09:36.962 "max_namespaces": 32, 00:09:36.962 "min_cntlid": 1, 00:09:36.962 "max_cntlid": 65519, 00:09:36.962 "namespaces": [ 00:09:36.962 { 00:09:36.962 "nsid": 1, 00:09:36.962 "bdev_name": "Malloc2", 00:09:36.962 "name": "Malloc2", 00:09:36.962 "nguid": "518B087EFC58488C93DAAAD29AC99036", 00:09:36.962 "uuid": "518b087e-fc58-488c-93da-aad29ac99036" 00:09:36.962 } 00:09:36.962 ] 00:09:36.962 } 00:09:36.962 ] 00:09:36.962 03:23:14 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:09:36.962 03:23:14 -- target/nvmf_vfio_user.sh@34 -- # aerpid=200386 00:09:36.962 03:23:14 -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:09:36.962 03:23:14 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:09:36.962 03:23:14 -- common/autotest_common.sh@1251 -- # local i=0 00:09:36.962 03:23:14 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:36.962 03:23:14 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:36.962 03:23:14 -- common/autotest_common.sh@1262 -- # return 0 00:09:36.962 03:23:14 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:09:36.962 03:23:14 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:09:36.962 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.221 [2024-04-19 03:23:14.595821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:37.221 Malloc3 00:09:37.221 03:23:14 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:09:37.479 [2024-04-19 03:23:14.967486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:37.479 03:23:14 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:37.479 Asynchronous Event Request test 00:09:37.479 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:37.479 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:37.479 Registering asynchronous event callbacks... 00:09:37.479 Starting namespace attribute notice tests for all controllers... 00:09:37.479 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:09:37.479 aer_cb - Changed Namespace 00:09:37.479 Cleaning up... 
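The namespace hot-add that drives this AER test is three RPCs; a minimal sketch, under the assumption that rpc.py talks to the same running target (the RPC names and arguments are exactly the ones shown above), with the resulting subsystem listing printed just below:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py    # same script as above

"$rpc" bdev_malloc_create 64 512 --name Malloc3                         # 64 MiB malloc bdev, 512-byte blocks
"$rpc" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2    # hot-add as nsid 2; triggers the namespace-attribute AEN logged above
"$rpc" nvmf_get_subsystems                                              # prints the listing that follows, now including nsid 2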
00:09:37.739 [ 00:09:37.739 { 00:09:37.739 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:37.739 "subtype": "Discovery", 00:09:37.739 "listen_addresses": [], 00:09:37.739 "allow_any_host": true, 00:09:37.739 "hosts": [] 00:09:37.739 }, 00:09:37.739 { 00:09:37.739 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:37.739 "subtype": "NVMe", 00:09:37.739 "listen_addresses": [ 00:09:37.739 { 00:09:37.739 "transport": "VFIOUSER", 00:09:37.739 "trtype": "VFIOUSER", 00:09:37.739 "adrfam": "IPv4", 00:09:37.739 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:37.739 "trsvcid": "0" 00:09:37.739 } 00:09:37.739 ], 00:09:37.739 "allow_any_host": true, 00:09:37.739 "hosts": [], 00:09:37.739 "serial_number": "SPDK1", 00:09:37.739 "model_number": "SPDK bdev Controller", 00:09:37.739 "max_namespaces": 32, 00:09:37.739 "min_cntlid": 1, 00:09:37.739 "max_cntlid": 65519, 00:09:37.739 "namespaces": [ 00:09:37.739 { 00:09:37.739 "nsid": 1, 00:09:37.739 "bdev_name": "Malloc1", 00:09:37.739 "name": "Malloc1", 00:09:37.739 "nguid": "F77243D02190498396ADAD75931B4A24", 00:09:37.739 "uuid": "f77243d0-2190-4983-96ad-ad75931b4a24" 00:09:37.739 }, 00:09:37.739 { 00:09:37.739 "nsid": 2, 00:09:37.739 "bdev_name": "Malloc3", 00:09:37.739 "name": "Malloc3", 00:09:37.739 "nguid": "6C7D01CCBE534A03B557D12B72D0141E", 00:09:37.739 "uuid": "6c7d01cc-be53-4a03-b557-d12b72d0141e" 00:09:37.739 } 00:09:37.739 ] 00:09:37.739 }, 00:09:37.739 { 00:09:37.739 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:37.739 "subtype": "NVMe", 00:09:37.739 "listen_addresses": [ 00:09:37.739 { 00:09:37.739 "transport": "VFIOUSER", 00:09:37.739 "trtype": "VFIOUSER", 00:09:37.739 "adrfam": "IPv4", 00:09:37.739 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:37.739 "trsvcid": "0" 00:09:37.739 } 00:09:37.739 ], 00:09:37.739 "allow_any_host": true, 00:09:37.739 "hosts": [], 00:09:37.739 "serial_number": "SPDK2", 00:09:37.739 "model_number": "SPDK bdev Controller", 00:09:37.739 "max_namespaces": 32, 00:09:37.739 "min_cntlid": 1, 00:09:37.739 "max_cntlid": 65519, 00:09:37.739 "namespaces": [ 00:09:37.739 { 00:09:37.739 "nsid": 1, 00:09:37.739 "bdev_name": "Malloc2", 00:09:37.739 "name": "Malloc2", 00:09:37.739 "nguid": "518B087EFC58488C93DAAAD29AC99036", 00:09:37.739 "uuid": "518b087e-fc58-488c-93da-aad29ac99036" 00:09:37.739 } 00:09:37.739 ] 00:09:37.739 } 00:09:37.739 ] 00:09:37.739 03:23:15 -- target/nvmf_vfio_user.sh@44 -- # wait 200386 00:09:37.739 03:23:15 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:37.739 03:23:15 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:09:37.739 03:23:15 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:09:37.739 03:23:15 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:37.739 [2024-04-19 03:23:15.235135] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:09:37.739 [2024-04-19 03:23:15.235177] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200512 ] 00:09:37.739 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.739 [2024-04-19 03:23:15.269804] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:09:37.739 [2024-04-19 03:23:15.272125] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:37.739 [2024-04-19 03:23:15.272154] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0048ae2000 00:09:37.739 [2024-04-19 03:23:15.273123] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:37.739 [2024-04-19 03:23:15.274128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:37.739 [2024-04-19 03:23:15.275139] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:37.739 [2024-04-19 03:23:15.276165] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:37.739 [2024-04-19 03:23:15.277160] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:37.739 [2024-04-19 03:23:15.278166] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:37.739 [2024-04-19 03:23:15.279174] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:37.739 [2024-04-19 03:23:15.280180] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:37.739 [2024-04-19 03:23:15.283393] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:37.739 [2024-04-19 03:23:15.283420] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0048ad7000 00:09:37.739 [2024-04-19 03:23:15.284581] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:37.998 [2024-04-19 03:23:15.299504] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:09:37.998 [2024-04-19 03:23:15.299540] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:09:37.998 [2024-04-19 03:23:15.301643] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:37.998 [2024-04-19 03:23:15.301714] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:37.998 [2024-04-19 03:23:15.301804] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:09:37.998 [2024-04-19 03:23:15.301832] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:09:37.998 [2024-04-19 03:23:15.301842] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:09:37.998 [2024-04-19 03:23:15.303392] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:09:37.998 [2024-04-19 03:23:15.303413] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:09:37.998 [2024-04-19 03:23:15.303426] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:09:37.998 [2024-04-19 03:23:15.303653] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:37.998 [2024-04-19 03:23:15.303689] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:09:37.998 [2024-04-19 03:23:15.303703] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:09:37.998 [2024-04-19 03:23:15.304658] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:09:37.998 [2024-04-19 03:23:15.304678] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:37.998 [2024-04-19 03:23:15.305662] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:09:37.998 [2024-04-19 03:23:15.305682] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:09:37.999 [2024-04-19 03:23:15.305692] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:09:37.999 [2024-04-19 03:23:15.305719] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:37.999 [2024-04-19 03:23:15.305828] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:09:37.999 [2024-04-19 03:23:15.305836] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:37.999 [2024-04-19 03:23:15.305845] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:09:37.999 [2024-04-19 03:23:15.306670] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:09:37.999 [2024-04-19 03:23:15.307673] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:09:37.999 [2024-04-19 03:23:15.308693] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:37.999 [2024-04-19 03:23:15.309669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:37.999 [2024-04-19 03:23:15.309770] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:37.999 [2024-04-19 03:23:15.314391] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:09:37.999 [2024-04-19 03:23:15.314412] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:37.999 [2024-04-19 03:23:15.314422] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.314446] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:09:37.999 [2024-04-19 03:23:15.314460] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.314485] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:37.999 [2024-04-19 03:23:15.314495] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:37.999 [2024-04-19 03:23:15.314514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:37.999 [2024-04-19 03:23:15.322397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:37.999 [2024-04-19 03:23:15.322420] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:09:37.999 [2024-04-19 03:23:15.322429] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:09:37.999 [2024-04-19 03:23:15.322437] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:09:37.999 [2024-04-19 03:23:15.322445] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:37.999 [2024-04-19 03:23:15.322453] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:09:37.999 [2024-04-19 03:23:15.322461] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:09:37.999 [2024-04-19 03:23:15.322469] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.322482] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.322498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:37.999 [2024-04-19 03:23:15.330395] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:37.999 [2024-04-19 03:23:15.330424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:37.999 [2024-04-19 03:23:15.330440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:37.999 [2024-04-19 03:23:15.330453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:37.999 [2024-04-19 03:23:15.330470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:37.999 [2024-04-19 03:23:15.330480] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.330496] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.330512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:37.999 [2024-04-19 03:23:15.338393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:37.999 [2024-04-19 03:23:15.338422] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:09:37.999 [2024-04-19 03:23:15.338432] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.338449] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.338460] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.338475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:37.999 [2024-04-19 03:23:15.346407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:37.999 [2024-04-19 03:23:15.346476] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.346492] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.346506] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:37.999 [2024-04-19 03:23:15.346515] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:37.999 [2024-04-19 03:23:15.346526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:37.999 
[2024-04-19 03:23:15.354392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:37.999 [2024-04-19 03:23:15.354416] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:09:37.999 [2024-04-19 03:23:15.354438] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.354460] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.354473] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:37.999 [2024-04-19 03:23:15.354482] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:37.999 [2024-04-19 03:23:15.354492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:37.999 [2024-04-19 03:23:15.362391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:37.999 [2024-04-19 03:23:15.362421] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.362437] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.362465] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:37.999 [2024-04-19 03:23:15.362475] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:37.999 [2024-04-19 03:23:15.362485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:37.999 [2024-04-19 03:23:15.370391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:37.999 [2024-04-19 03:23:15.370413] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.370426] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.370441] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.370455] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.370464] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.370472] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:09:37.999 [2024-04-19 03:23:15.370480] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:09:37.999 [2024-04-19 03:23:15.370489] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:09:37.999 [2024-04-19 03:23:15.370515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:37.999 [2024-04-19 03:23:15.377416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:37.999 [2024-04-19 03:23:15.377454] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:37.999 [2024-04-19 03:23:15.386393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:37.999 [2024-04-19 03:23:15.386419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:38.000 [2024-04-19 03:23:15.394408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:38.000 [2024-04-19 03:23:15.394432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:38.000 [2024-04-19 03:23:15.402393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:38.000 [2024-04-19 03:23:15.402419] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:38.000 [2024-04-19 03:23:15.402429] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:38.000 [2024-04-19 03:23:15.402436] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:38.000 [2024-04-19 03:23:15.402442] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:38.000 [2024-04-19 03:23:15.402452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:38.000 [2024-04-19 03:23:15.402464] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:38.000 [2024-04-19 03:23:15.402477] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:38.000 [2024-04-19 03:23:15.402487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:38.000 [2024-04-19 03:23:15.402498] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:38.000 [2024-04-19 03:23:15.402506] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:38.000 [2024-04-19 03:23:15.402516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:38.000 [2024-04-19 03:23:15.402528] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:38.000 [2024-04-19 03:23:15.402536] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:38.000 [2024-04-19 03:23:15.402545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:38.000 [2024-04-19 03:23:15.410408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:38.000 [2024-04-19 03:23:15.410437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:38.000 [2024-04-19 03:23:15.410454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:38.000 [2024-04-19 03:23:15.410466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:38.000 ===================================================== 00:09:38.000 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:38.000 ===================================================== 00:09:38.000 Controller Capabilities/Features 00:09:38.000 ================================ 00:09:38.000 Vendor ID: 4e58 00:09:38.000 Subsystem Vendor ID: 4e58 00:09:38.000 Serial Number: SPDK2 00:09:38.000 Model Number: SPDK bdev Controller 00:09:38.000 Firmware Version: 24.05 00:09:38.000 Recommended Arb Burst: 6 00:09:38.000 IEEE OUI Identifier: 8d 6b 50 00:09:38.000 Multi-path I/O 00:09:38.000 May have multiple subsystem ports: Yes 00:09:38.000 May have multiple controllers: Yes 00:09:38.000 Associated with SR-IOV VF: No 00:09:38.000 Max Data Transfer Size: 131072 00:09:38.000 Max Number of Namespaces: 32 00:09:38.000 Max Number of I/O Queues: 127 00:09:38.000 NVMe Specification Version (VS): 1.3 00:09:38.000 NVMe Specification Version (Identify): 1.3 00:09:38.000 Maximum Queue Entries: 256 00:09:38.000 Contiguous Queues Required: Yes 00:09:38.000 Arbitration Mechanisms Supported 00:09:38.000 Weighted Round Robin: Not Supported 00:09:38.000 Vendor Specific: Not Supported 00:09:38.000 Reset Timeout: 15000 ms 00:09:38.000 Doorbell Stride: 4 bytes 00:09:38.000 NVM Subsystem Reset: Not Supported 00:09:38.000 Command Sets Supported 00:09:38.000 NVM Command Set: Supported 00:09:38.000 Boot Partition: Not Supported 00:09:38.000 Memory Page Size Minimum: 4096 bytes 00:09:38.000 Memory Page Size Maximum: 4096 bytes 00:09:38.000 Persistent Memory Region: Not Supported 00:09:38.000 Optional Asynchronous Events Supported 00:09:38.000 Namespace Attribute Notices: Supported 00:09:38.000 Firmware Activation Notices: Not Supported 00:09:38.000 ANA Change Notices: Not Supported 00:09:38.000 PLE Aggregate Log Change Notices: Not Supported 00:09:38.000 LBA Status Info Alert Notices: Not Supported 00:09:38.000 EGE Aggregate Log Change Notices: Not Supported 00:09:38.000 Normal NVM Subsystem Shutdown event: Not Supported 00:09:38.000 Zone Descriptor Change Notices: Not Supported 00:09:38.000 Discovery Log Change Notices: Not Supported 00:09:38.000 Controller Attributes 00:09:38.000 128-bit Host Identifier: Supported 00:09:38.000 Non-Operational Permissive Mode: Not Supported 00:09:38.000 NVM Sets: Not Supported 00:09:38.000 Read Recovery Levels: Not Supported 00:09:38.000 Endurance Groups: Not Supported 00:09:38.000 Predictable Latency Mode: Not Supported 00:09:38.000 Traffic Based Keep ALive: Not Supported 00:09:38.000 Namespace Granularity: Not Supported 
00:09:38.000 SQ Associations: Not Supported 00:09:38.000 UUID List: Not Supported 00:09:38.000 Multi-Domain Subsystem: Not Supported 00:09:38.000 Fixed Capacity Management: Not Supported 00:09:38.000 Variable Capacity Management: Not Supported 00:09:38.000 Delete Endurance Group: Not Supported 00:09:38.000 Delete NVM Set: Not Supported 00:09:38.000 Extended LBA Formats Supported: Not Supported 00:09:38.000 Flexible Data Placement Supported: Not Supported 00:09:38.000 00:09:38.000 Controller Memory Buffer Support 00:09:38.000 ================================ 00:09:38.000 Supported: No 00:09:38.000 00:09:38.000 Persistent Memory Region Support 00:09:38.000 ================================ 00:09:38.000 Supported: No 00:09:38.000 00:09:38.000 Admin Command Set Attributes 00:09:38.000 ============================ 00:09:38.000 Security Send/Receive: Not Supported 00:09:38.000 Format NVM: Not Supported 00:09:38.000 Firmware Activate/Download: Not Supported 00:09:38.000 Namespace Management: Not Supported 00:09:38.000 Device Self-Test: Not Supported 00:09:38.000 Directives: Not Supported 00:09:38.000 NVMe-MI: Not Supported 00:09:38.000 Virtualization Management: Not Supported 00:09:38.000 Doorbell Buffer Config: Not Supported 00:09:38.000 Get LBA Status Capability: Not Supported 00:09:38.000 Command & Feature Lockdown Capability: Not Supported 00:09:38.000 Abort Command Limit: 4 00:09:38.000 Async Event Request Limit: 4 00:09:38.000 Number of Firmware Slots: N/A 00:09:38.000 Firmware Slot 1 Read-Only: N/A 00:09:38.000 Firmware Activation Without Reset: N/A 00:09:38.000 Multiple Update Detection Support: N/A 00:09:38.000 Firmware Update Granularity: No Information Provided 00:09:38.000 Per-Namespace SMART Log: No 00:09:38.000 Asymmetric Namespace Access Log Page: Not Supported 00:09:38.000 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:09:38.000 Command Effects Log Page: Supported 00:09:38.000 Get Log Page Extended Data: Supported 00:09:38.000 Telemetry Log Pages: Not Supported 00:09:38.000 Persistent Event Log Pages: Not Supported 00:09:38.000 Supported Log Pages Log Page: May Support 00:09:38.000 Commands Supported & Effects Log Page: Not Supported 00:09:38.000 Feature Identifiers & Effects Log Page: May Support 00:09:38.000 NVMe-MI Commands & Effects Log Page: May Support 00:09:38.000 Data Area 4 for Telemetry Log: Not Supported 00:09:38.000 Error Log Page Entries Supported: 128 00:09:38.000 Keep Alive: Supported 00:09:38.000 Keep Alive Granularity: 10000 ms 00:09:38.000 00:09:38.000 NVM Command Set Attributes 00:09:38.000 ========================== 00:09:38.000 Submission Queue Entry Size 00:09:38.000 Max: 64 00:09:38.000 Min: 64 00:09:38.000 Completion Queue Entry Size 00:09:38.000 Max: 16 00:09:38.000 Min: 16 00:09:38.000 Number of Namespaces: 32 00:09:38.000 Compare Command: Supported 00:09:38.000 Write Uncorrectable Command: Not Supported 00:09:38.000 Dataset Management Command: Supported 00:09:38.000 Write Zeroes Command: Supported 00:09:38.000 Set Features Save Field: Not Supported 00:09:38.000 Reservations: Not Supported 00:09:38.000 Timestamp: Not Supported 00:09:38.000 Copy: Supported 00:09:38.000 Volatile Write Cache: Present 00:09:38.000 Atomic Write Unit (Normal): 1 00:09:38.000 Atomic Write Unit (PFail): 1 00:09:38.000 Atomic Compare & Write Unit: 1 00:09:38.000 Fused Compare & Write: Supported 00:09:38.000 Scatter-Gather List 00:09:38.000 SGL Command Set: Supported (Dword aligned) 00:09:38.000 SGL Keyed: Not Supported 00:09:38.000 SGL Bit Bucket Descriptor: Not Supported 00:09:38.000 
SGL Metadata Pointer: Not Supported 00:09:38.001 Oversized SGL: Not Supported 00:09:38.001 SGL Metadata Address: Not Supported 00:09:38.001 SGL Offset: Not Supported 00:09:38.001 Transport SGL Data Block: Not Supported 00:09:38.001 Replay Protected Memory Block: Not Supported 00:09:38.001 00:09:38.001 Firmware Slot Information 00:09:38.001 ========================= 00:09:38.001 Active slot: 1 00:09:38.001 Slot 1 Firmware Revision: 24.05 00:09:38.001 00:09:38.001 00:09:38.001 Commands Supported and Effects 00:09:38.001 ============================== 00:09:38.001 Admin Commands 00:09:38.001 -------------- 00:09:38.001 Get Log Page (02h): Supported 00:09:38.001 Identify (06h): Supported 00:09:38.001 Abort (08h): Supported 00:09:38.001 Set Features (09h): Supported 00:09:38.001 Get Features (0Ah): Supported 00:09:38.001 Asynchronous Event Request (0Ch): Supported 00:09:38.001 Keep Alive (18h): Supported 00:09:38.001 I/O Commands 00:09:38.001 ------------ 00:09:38.001 Flush (00h): Supported LBA-Change 00:09:38.001 Write (01h): Supported LBA-Change 00:09:38.001 Read (02h): Supported 00:09:38.001 Compare (05h): Supported 00:09:38.001 Write Zeroes (08h): Supported LBA-Change 00:09:38.001 Dataset Management (09h): Supported LBA-Change 00:09:38.001 Copy (19h): Supported LBA-Change 00:09:38.001 Unknown (79h): Supported LBA-Change 00:09:38.001 Unknown (7Ah): Supported 00:09:38.001 00:09:38.001 Error Log 00:09:38.001 ========= 00:09:38.001 00:09:38.001 Arbitration 00:09:38.001 =========== 00:09:38.001 Arbitration Burst: 1 00:09:38.001 00:09:38.001 Power Management 00:09:38.001 ================ 00:09:38.001 Number of Power States: 1 00:09:38.001 Current Power State: Power State #0 00:09:38.001 Power State #0: 00:09:38.001 Max Power: 0.00 W 00:09:38.001 Non-Operational State: Operational 00:09:38.001 Entry Latency: Not Reported 00:09:38.001 Exit Latency: Not Reported 00:09:38.001 Relative Read Throughput: 0 00:09:38.001 Relative Read Latency: 0 00:09:38.001 Relative Write Throughput: 0 00:09:38.001 Relative Write Latency: 0 00:09:38.001 Idle Power: Not Reported 00:09:38.001 Active Power: Not Reported 00:09:38.001 Non-Operational Permissive Mode: Not Supported 00:09:38.001 00:09:38.001 Health Information 00:09:38.001 ================== 00:09:38.001 Critical Warnings: 00:09:38.001 Available Spare Space: OK 00:09:38.001 Temperature: OK 00:09:38.001 Device Reliability: OK 00:09:38.001 Read Only: No 00:09:38.001 Volatile Memory Backup: OK 00:09:38.001 
[2024-04-19 03:23:15.410597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:38.001 [2024-04-19 03:23:15.418405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:38.001 [2024-04-19 03:23:15.418452] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:09:38.001 [2024-04-19 03:23:15.418469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.001 [2024-04-19 03:23:15.418480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.001 [2024-04-19 03:23:15.418490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.001 [2024-04-19 03:23:15.418500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.001 [2024-04-19 03:23:15.418579] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:38.001 [2024-04-19 03:23:15.418601] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:09:38.001 [2024-04-19 03:23:15.419591] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:38.001 [2024-04-19 03:23:15.419665] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:09:38.001 [2024-04-19 03:23:15.419680] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:09:38.001 [2024-04-19 03:23:15.420598] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:09:38.001 [2024-04-19 03:23:15.420623] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:09:38.001 [2024-04-19 03:23:15.420703] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:09:38.001 [2024-04-19 03:23:15.421941] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:38.001 
Current Temperature: 0 Kelvin (-273 Celsius) 00:09:38.001 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:38.001 Available Spare: 0% 00:09:38.001 Available Spare Threshold: 0% 00:09:38.001 Life Percentage Used: 0% 00:09:38.001 Data Units Read: 0 00:09:38.001 Data Units Written: 0 00:09:38.001 Host Read Commands: 0 00:09:38.001 Host Write Commands: 0 00:09:38.001 Controller Busy Time: 0 minutes 00:09:38.001 Power Cycles: 0 00:09:38.001 Power On Hours: 0 hours 00:09:38.001 Unsafe Shutdowns: 0 00:09:38.001 Unrecoverable Media Errors: 0 00:09:38.001 Lifetime Error Log Entries: 0 00:09:38.001 Warning Temperature Time: 0 minutes 00:09:38.001 Critical Temperature Time: 0 minutes 00:09:38.001 00:09:38.001 Number of Queues 00:09:38.001 ================ 00:09:38.001 Number of I/O Submission Queues: 127 00:09:38.001 Number of I/O Completion Queues: 127 00:09:38.001 00:09:38.001 Active Namespaces 00:09:38.001 ================= 00:09:38.001 Namespace ID:1 00:09:38.001 Error Recovery Timeout: Unlimited 00:09:38.001 Command Set Identifier: NVM (00h) 00:09:38.001 Deallocate: Supported 00:09:38.001 Deallocated/Unwritten Error: Not Supported 00:09:38.001 Deallocated Read Value: Unknown 00:09:38.001 Deallocate in Write Zeroes: Not Supported 00:09:38.001 Deallocated Guard Field: 0xFFFF 00:09:38.001 Flush: Supported 00:09:38.001 Reservation: Supported 00:09:38.001 Namespace Sharing Capabilities: Multiple Controllers 00:09:38.001 Size (in LBAs): 131072 (0GiB) 00:09:38.001 Capacity (in LBAs): 131072 (0GiB) 00:09:38.001 Utilization (in LBAs): 131072 (0GiB) 00:09:38.001 NGUID: 518B087EFC58488C93DAAAD29AC99036 00:09:38.001 UUID: 518b087e-fc58-488c-93da-aad29ac99036 00:09:38.001 Thin Provisioning: Not Supported 00:09:38.001 Per-NS Atomic Units: Yes 00:09:38.001 Atomic Boundary Size (Normal): 0 00:09:38.001 Atomic Boundary Size (PFail): 0 00:09:38.001 Atomic Boundary Offset: 0 00:09:38.001 Maximum Single Source Range Length: 65535 
00:09:38.001 Maximum Copy Length: 65535 00:09:38.001 Maximum Source Range Count: 1 00:09:38.001 NGUID/EUI64 Never Reused: No 00:09:38.001 Namespace Write Protected: No 00:09:38.001 Number of LBA Formats: 1 00:09:38.001 Current LBA Format: LBA Format #00 00:09:38.001 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:38.001 00:09:38.001 03:23:15 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:38.001 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.258 [2024-04-19 03:23:15.651197] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:43.521 [2024-04-19 03:23:20.754719] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:43.521 Initializing NVMe Controllers 00:09:43.521 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:43.521 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:09:43.521 Initialization complete. Launching workers. 00:09:43.521 ======================================================== 00:09:43.521 Latency(us) 00:09:43.521 Device Information : IOPS MiB/s Average min max 00:09:43.521 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33201.22 129.69 3854.29 1176.77 7438.20 00:09:43.521 ======================================================== 00:09:43.521 Total : 33201.22 129.69 3854.29 1176.77 7438.20 00:09:43.521 00:09:43.521 03:23:20 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:43.521 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.521 [2024-04-19 03:23:20.975310] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:48.785 [2024-04-19 03:23:25.994747] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:48.785 Initializing NVMe Controllers 00:09:48.785 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:48.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:09:48.785 Initialization complete. Launching workers. 
00:09:48.785 ======================================================== 00:09:48.785 Latency(us) 00:09:48.785 Device Information : IOPS MiB/s Average min max 00:09:48.785 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32598.63 127.34 3925.93 1212.31 7460.14 00:09:48.785 ======================================================== 00:09:48.785 Total : 32598.63 127.34 3925.93 1212.31 7460.14 00:09:48.785 00:09:48.785 03:23:26 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:48.785 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.785 [2024-04-19 03:23:26.197652] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:54.055 [2024-04-19 03:23:31.325520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:54.055 Initializing NVMe Controllers 00:09:54.055 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:54.055 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:54.055 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:09:54.055 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:09:54.055 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:09:54.055 Initialization complete. Launching workers. 00:09:54.055 Starting thread on core 2 00:09:54.055 Starting thread on core 3 00:09:54.055 Starting thread on core 1 00:09:54.055 03:23:31 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:09:54.055 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.354 [2024-04-19 03:23:31.630923] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:57.638 [2024-04-19 03:23:34.710388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:57.638 Initializing NVMe Controllers 00:09:57.638 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:09:57.638 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:09:57.638 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:09:57.638 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:09:57.638 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:09:57.638 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:09:57.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:57.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:57.639 Initialization complete. Launching workers. 
00:09:57.639 Starting thread on core 1 with urgent priority queue 00:09:57.639 Starting thread on core 2 with urgent priority queue 00:09:57.639 Starting thread on core 3 with urgent priority queue 00:09:57.639 Starting thread on core 0 with urgent priority queue 00:09:57.639 SPDK bdev Controller (SPDK2 ) core 0: 4804.00 IO/s 20.82 secs/100000 ios 00:09:57.639 SPDK bdev Controller (SPDK2 ) core 1: 5456.33 IO/s 18.33 secs/100000 ios 00:09:57.639 SPDK bdev Controller (SPDK2 ) core 2: 5418.33 IO/s 18.46 secs/100000 ios 00:09:57.639 SPDK bdev Controller (SPDK2 ) core 3: 5077.00 IO/s 19.70 secs/100000 ios 00:09:57.639 ======================================================== 00:09:57.639 00:09:57.639 03:23:34 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:09:57.639 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.639 [2024-04-19 03:23:35.007901] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:57.639 [2024-04-19 03:23:35.021043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:57.639 Initializing NVMe Controllers 00:09:57.639 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:09:57.639 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:09:57.639 Namespace ID: 1 size: 0GB 00:09:57.639 Initialization complete. 00:09:57.639 INFO: using host memory buffer for IO 00:09:57.639 Hello world! 00:09:57.639 03:23:35 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:09:57.639 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.896 [2024-04-19 03:23:35.316207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:59.271 Initializing NVMe Controllers 00:09:59.271 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:09:59.271 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:09:59.271 Initialization complete. Launching workers. 
00:09:59.271 submit (in ns) avg, min, max = 5699.5, 3450.0, 4013908.9 00:09:59.271 complete (in ns) avg, min, max = 26276.4, 2034.4, 6992271.1 00:09:59.271 00:09:59.271 Submit histogram 00:09:59.271 ================ 00:09:59.271 Range in us Cumulative Count 00:09:59.271 3.437 - 3.461: 0.0073% ( 1) 00:09:59.271 3.461 - 3.484: 0.1688% ( 22) 00:09:59.271 3.484 - 3.508: 0.6093% ( 60) 00:09:59.271 3.508 - 3.532: 2.1728% ( 213) 00:09:59.271 3.532 - 3.556: 5.2265% ( 416) 00:09:59.271 3.556 - 3.579: 11.1649% ( 809) 00:09:59.271 3.579 - 3.603: 18.8431% ( 1046) 00:09:59.271 3.603 - 3.627: 28.9510% ( 1377) 00:09:59.271 3.627 - 3.650: 38.6552% ( 1322) 00:09:59.271 3.650 - 3.674: 48.1098% ( 1288) 00:09:59.271 3.674 - 3.698: 54.7016% ( 898) 00:09:59.271 3.698 - 3.721: 60.8236% ( 834) 00:09:59.271 3.721 - 3.745: 65.0004% ( 569) 00:09:59.271 3.745 - 3.769: 68.6119% ( 492) 00:09:59.271 3.769 - 3.793: 71.7977% ( 434) 00:09:59.271 3.793 - 3.816: 74.9248% ( 426) 00:09:59.271 3.816 - 3.840: 78.3234% ( 463) 00:09:59.271 3.840 - 3.864: 82.1478% ( 521) 00:09:59.271 3.864 - 3.887: 85.2822% ( 427) 00:09:59.271 3.887 - 3.911: 87.5651% ( 311) 00:09:59.271 3.911 - 3.935: 89.5985% ( 277) 00:09:59.271 3.935 - 3.959: 91.1840% ( 216) 00:09:59.271 3.959 - 3.982: 92.6375% ( 198) 00:09:59.271 3.982 - 4.006: 93.7312% ( 149) 00:09:59.271 4.006 - 4.030: 94.5607% ( 113) 00:09:59.271 4.030 - 4.053: 95.2360% ( 92) 00:09:59.271 4.053 - 4.077: 95.9407% ( 96) 00:09:59.271 4.077 - 4.101: 96.4398% ( 68) 00:09:59.271 4.101 - 4.124: 96.8729% ( 59) 00:09:59.271 4.124 - 4.148: 97.0564% ( 25) 00:09:59.271 4.148 - 4.172: 97.2033% ( 20) 00:09:59.271 4.172 - 4.196: 97.3060% ( 14) 00:09:59.271 4.196 - 4.219: 97.3941% ( 12) 00:09:59.271 4.219 - 4.243: 97.4675% ( 10) 00:09:59.271 4.243 - 4.267: 97.5996% ( 18) 00:09:59.271 4.267 - 4.290: 97.6804% ( 11) 00:09:59.271 4.290 - 4.314: 97.7538% ( 10) 00:09:59.271 4.314 - 4.338: 97.7832% ( 4) 00:09:59.271 4.338 - 4.361: 97.8199% ( 5) 00:09:59.271 4.361 - 4.385: 97.8639% ( 6) 00:09:59.271 4.409 - 4.433: 97.8786% ( 2) 00:09:59.271 4.433 - 4.456: 97.8933% ( 2) 00:09:59.271 4.480 - 4.504: 97.9006% ( 1) 00:09:59.271 4.527 - 4.551: 97.9153% ( 2) 00:09:59.271 4.551 - 4.575: 97.9226% ( 1) 00:09:59.271 4.599 - 4.622: 97.9520% ( 4) 00:09:59.271 4.622 - 4.646: 97.9667% ( 2) 00:09:59.271 4.646 - 4.670: 97.9814% ( 2) 00:09:59.271 4.670 - 4.693: 97.9960% ( 2) 00:09:59.271 4.693 - 4.717: 98.0034% ( 1) 00:09:59.271 4.717 - 4.741: 98.0327% ( 4) 00:09:59.271 4.741 - 4.764: 98.0841% ( 7) 00:09:59.271 4.764 - 4.788: 98.1355% ( 7) 00:09:59.271 4.788 - 4.812: 98.1649% ( 4) 00:09:59.271 4.812 - 4.836: 98.1942% ( 4) 00:09:59.271 4.836 - 4.859: 98.2089% ( 2) 00:09:59.271 4.859 - 4.883: 98.2456% ( 5) 00:09:59.271 4.883 - 4.907: 98.3043% ( 8) 00:09:59.271 4.907 - 4.930: 98.3190% ( 2) 00:09:59.271 4.930 - 4.954: 98.3631% ( 6) 00:09:59.271 4.954 - 4.978: 98.3998% ( 5) 00:09:59.271 4.978 - 5.001: 98.4291% ( 4) 00:09:59.271 5.001 - 5.025: 98.4585% ( 4) 00:09:59.271 5.025 - 5.049: 98.5025% ( 6) 00:09:59.271 5.049 - 5.073: 98.5392% ( 5) 00:09:59.271 5.073 - 5.096: 98.5539% ( 2) 00:09:59.271 5.096 - 5.120: 98.5833% ( 4) 00:09:59.271 5.120 - 5.144: 98.6200% ( 5) 00:09:59.271 5.144 - 5.167: 98.6420% ( 3) 00:09:59.271 5.191 - 5.215: 98.6567% ( 2) 00:09:59.271 5.215 - 5.239: 98.6640% ( 1) 00:09:59.271 5.239 - 5.262: 98.6860% ( 3) 00:09:59.271 5.262 - 5.286: 98.7007% ( 2) 00:09:59.271 5.333 - 5.357: 98.7081% ( 1) 00:09:59.271 5.357 - 5.381: 98.7154% ( 1) 00:09:59.271 5.452 - 5.476: 98.7227% ( 1) 00:09:59.271 5.547 - 5.570: 98.7301% ( 1) 
00:09:59.271 5.665 - 5.689: 98.7374% ( 1) 00:09:59.271 5.689 - 5.713: 98.7448% ( 1) 00:09:59.271 5.855 - 5.879: 98.7521% ( 1) 00:09:59.271 6.163 - 6.210: 98.7595% ( 1) 00:09:59.271 6.258 - 6.305: 98.7741% ( 2) 00:09:59.271 6.305 - 6.353: 98.7815% ( 1) 00:09:59.271 6.353 - 6.400: 98.7962% ( 2) 00:09:59.271 6.400 - 6.447: 98.8035% ( 1) 00:09:59.271 6.447 - 6.495: 98.8108% ( 1) 00:09:59.271 6.495 - 6.542: 98.8182% ( 1) 00:09:59.271 6.542 - 6.590: 98.8329% ( 2) 00:09:59.272 6.590 - 6.637: 98.8402% ( 1) 00:09:59.272 6.684 - 6.732: 98.8475% ( 1) 00:09:59.272 6.969 - 7.016: 98.8549% ( 1) 00:09:59.272 7.111 - 7.159: 98.8622% ( 1) 00:09:59.272 7.159 - 7.206: 98.8769% ( 2) 00:09:59.272 7.206 - 7.253: 98.8842% ( 1) 00:09:59.272 7.253 - 7.301: 98.9063% ( 3) 00:09:59.272 7.538 - 7.585: 98.9209% ( 2) 00:09:59.272 7.585 - 7.633: 98.9283% ( 1) 00:09:59.272 7.680 - 7.727: 98.9503% ( 3) 00:09:59.272 7.822 - 7.870: 98.9650% ( 2) 00:09:59.272 8.012 - 8.059: 98.9723% ( 1) 00:09:59.272 8.059 - 8.107: 98.9797% ( 1) 00:09:59.272 8.154 - 8.201: 98.9943% ( 2) 00:09:59.272 8.249 - 8.296: 99.0017% ( 1) 00:09:59.272 8.296 - 8.344: 99.0090% ( 1) 00:09:59.272 8.439 - 8.486: 99.0164% ( 1) 00:09:59.272 8.533 - 8.581: 99.0311% ( 2) 00:09:59.272 8.723 - 8.770: 99.0384% ( 1) 00:09:59.272 8.865 - 8.913: 99.0457% ( 1) 00:09:59.272 9.244 - 9.292: 99.0531% ( 1) 00:09:59.272 9.339 - 9.387: 99.0604% ( 1) 00:09:59.272 9.624 - 9.671: 99.0678% ( 1) 00:09:59.272 9.766 - 9.813: 99.0751% ( 1) 00:09:59.272 9.813 - 9.861: 99.0824% ( 1) 00:09:59.272 10.335 - 10.382: 99.0898% ( 1) 00:09:59.272 10.382 - 10.430: 99.0971% ( 1) 00:09:59.272 10.761 - 10.809: 99.1045% ( 1) 00:09:59.272 10.999 - 11.046: 99.1118% ( 1) 00:09:59.272 11.520 - 11.567: 99.1191% ( 1) 00:09:59.272 11.615 - 11.662: 99.1265% ( 1) 00:09:59.272 12.041 - 12.089: 99.1338% ( 1) 00:09:59.272 12.136 - 12.231: 99.1412% ( 1) 00:09:59.272 12.895 - 12.990: 99.1485% ( 1) 00:09:59.272 13.843 - 13.938: 99.1558% ( 1) 00:09:59.272 13.938 - 14.033: 99.1705% ( 2) 00:09:59.272 14.033 - 14.127: 99.1779% ( 1) 00:09:59.272 15.929 - 16.024: 99.1852% ( 1) 00:09:59.272 17.067 - 17.161: 99.1999% ( 2) 00:09:59.272 17.161 - 17.256: 99.2072% ( 1) 00:09:59.272 17.256 - 17.351: 99.2146% ( 1) 00:09:59.272 17.351 - 17.446: 99.2219% ( 1) 00:09:59.272 17.446 - 17.541: 99.2366% ( 2) 00:09:59.272 17.541 - 17.636: 99.2733% ( 5) 00:09:59.272 17.636 - 17.730: 99.3394% ( 9) 00:09:59.272 17.730 - 17.825: 99.3834% ( 6) 00:09:59.272 17.825 - 17.920: 99.4274% ( 6) 00:09:59.272 17.920 - 18.015: 99.4641% ( 5) 00:09:59.272 18.015 - 18.110: 99.5155% ( 7) 00:09:59.272 18.110 - 18.204: 99.5669% ( 7) 00:09:59.272 18.204 - 18.299: 99.6550% ( 12) 00:09:59.272 18.299 - 18.394: 99.7357% ( 11) 00:09:59.272 18.394 - 18.489: 99.7724% ( 5) 00:09:59.272 18.489 - 18.584: 99.8165% ( 6) 00:09:59.272 18.584 - 18.679: 99.8238% ( 1) 00:09:59.272 18.679 - 18.773: 99.8385% ( 2) 00:09:59.272 18.773 - 18.868: 99.8605% ( 3) 00:09:59.272 18.963 - 19.058: 99.8679% ( 1) 00:09:59.272 19.058 - 19.153: 99.8899% ( 3) 00:09:59.272 19.153 - 19.247: 99.8972% ( 1) 00:09:59.272 19.247 - 19.342: 99.9046% ( 1) 00:09:59.272 19.342 - 19.437: 99.9119% ( 1) 00:09:59.272 19.437 - 19.532: 99.9193% ( 1) 00:09:59.272 19.721 - 19.816: 99.9266% ( 1) 00:09:59.272 20.196 - 20.290: 99.9339% ( 1) 00:09:59.272 21.713 - 21.807: 99.9413% ( 1) 00:09:59.272 22.756 - 22.850: 99.9486% ( 1) 00:09:59.272 1043.721 - 1049.790: 99.9560% ( 1) 00:09:59.272 3980.705 - 4004.978: 99.9927% ( 5) 00:09:59.272 4004.978 - 4029.250: 100.0000% ( 1) 00:09:59.272 00:09:59.272 Complete histogram 
00:09:59.272 ================== 00:09:59.272 Range in us Cumulative Count 00:09:59.272 2.027 - 2.039: 0.2422% ( 33) 00:09:59.272 2.039 - 2.050: 10.5190% ( 1400) 00:09:59.272 2.050 - 2.062: 14.6003% ( 556) 00:09:59.272 2.062 - 2.074: 21.5151% ( 942) 00:09:59.272 2.074 - 2.086: 55.6485% ( 4650) 00:09:59.272 2.086 - 2.098: 62.3798% ( 917) 00:09:59.272 2.098 - 2.110: 65.2940% ( 397) 00:09:59.272 2.110 - 2.121: 70.0874% ( 653) 00:09:59.272 2.121 - 2.133: 70.6819% ( 81) 00:09:59.272 2.133 - 2.145: 77.1049% ( 875) 00:09:59.272 2.145 - 2.157: 87.0146% ( 1350) 00:09:59.272 2.157 - 2.169: 88.5928% ( 215) 00:09:59.272 2.169 - 2.181: 89.5104% ( 125) 00:09:59.272 2.181 - 2.193: 90.8610% ( 184) 00:09:59.272 2.193 - 2.204: 91.6245% ( 104) 00:09:59.272 2.204 - 2.216: 92.5640% ( 128) 00:09:59.272 2.216 - 2.228: 94.6634% ( 286) 00:09:59.272 2.228 - 2.240: 95.3241% ( 90) 00:09:59.272 2.240 - 2.252: 95.6471% ( 44) 00:09:59.272 2.252 - 2.264: 95.8673% ( 30) 00:09:59.272 2.264 - 2.276: 96.0068% ( 19) 00:09:59.272 2.276 - 2.287: 96.0508% ( 6) 00:09:59.272 2.287 - 2.299: 96.1609% ( 15) 00:09:59.272 2.299 - 2.311: 96.2784% ( 16) 00:09:59.272 2.311 - 2.323: 96.4325% ( 21) 00:09:59.272 2.323 - 2.335: 96.5793% ( 20) 00:09:59.272 2.335 - 2.347: 96.8069% ( 31) 00:09:59.272 2.347 - 2.359: 97.0491% ( 33) 00:09:59.272 2.359 - 2.370: 97.4088% ( 49) 00:09:59.272 2.370 - 2.382: 97.7391% ( 45) 00:09:59.272 2.382 - 2.394: 98.0181% ( 38) 00:09:59.272 2.394 - 2.406: 98.2309% ( 29) 00:09:59.272 2.406 - 2.418: 98.3190% ( 12) 00:09:59.272 2.418 - 2.430: 98.3851% ( 9) 00:09:59.272 2.430 - 2.441: 98.4511% ( 9) 00:09:59.272 2.441 - 2.453: 98.4805% ( 4) 00:09:59.272 2.453 - 2.465: 98.5319% ( 7) 00:09:59.272 2.465 - 2.477: 98.5466% ( 2) 00:09:59.272 2.477 - 2.489: 98.5613% ( 2) 00:09:59.272 2.489 - 2.501: 98.5759% ( 2) 00:09:59.272 2.501 - 2.513: 98.5906% ( 2) 00:09:59.272 2.513 - 2.524: 98.5980% ( 1) 00:09:59.272 2.524 - 2.536: 98.6053% ( 1) 00:09:59.272 2.536 - 2.548: 98.6200% ( 2) 00:09:59.272 2.667 - 2.679: 98.6273% ( 1) 00:09:59.272 2.690 - 2.702: 98.6347% ( 1) 00:09:59.272 2.738 - 2.750: 98.6420% ( 1) 00:09:59.272 2.750 - 2.761: 98.6493% ( 1) 00:09:59.272 2.773 - 2.785: 98.6567% ( 1) 00:09:59.272 2.797 - 2.809: 98.6640% ( 1) 00:09:59.272 3.461 - 3.484: 98.6860% ( 3) 00:09:59.272 3.532 - 3.556: 98.7154% ( 4) 00:09:59.272 3.556 - 3.579: 98.7227% ( 1) 00:09:59.272 3.627 - 3.650: 98.7301% ( 1) 00:09:59.272 3.674 - 3.698: 98.7521% ( 3) 00:09:59.272 3.698 - 3.721: 98.7595% ( 1) 00:09:59.272 3.721 - 3.745: 98.7668% ( 1) 00:09:59.272 3.745 - 3.769: 98.7741% ( 1) 00:09:59.272 3.816 - 3.840: 98.7815% ( 1) 00:09:59.272 3.864 - 3.887: 98.7888% ( 1) 00:09:59.272 3.911 - 3.935: 98.8035% ( 2) 00:09:59.272 4.030 - 4.053: 98.8108% ( 1) 00:09:59.272 4.053 - 4.077: 98.8182% ( 1) 00:09:59.272 4.575 - 4.599: 98.8255% ( 1) 00:09:59.272 4.670 - 4.693: 98.8329% ( 1) 00:09:59.272 4.717 - 4.741: 98.8402% ( 1) 00:09:59.272 4.836 - 4.859: 98.8475% ( 1) 00:09:59.272 4.907 - 4.930: 98.8549% ( 1) 00:09:59.272 4.930 - 4.954: 98.8622% ( 1) 00:09:59.272 4.954 - 4.978: 98.8696% ( 1) 00:09:59.272 5.073 - 5.096: 98.8769% ( 1) 00:09:59.272 5.191 - 5.215: 98.8842% ( 1) 00:09:59.272 5.262 - 5.286: 98.8916% ( 1) 00:09:59.272 5.381 - 5.404: 98.8989% ( 1) 00:09:59.272 5.760 - 5.784: 98.9063% ( 1) 00:09:59.272 5.807 - 5.831: 98.9136% ( 1) 00:09:59.272 5.950 - 5.973: 98.9209% ( 1) 00:09:59.272 6.116 - 6.163: 98.9283% ( 1) 00:09:59.272 6.637 - 6.684: 98.9430% ( 2) 00:09:59.272 7.348 - 7.396: 98.9503% ( 1) 00:09:59.272 8.533 - 8.581: 98.9576% ( 1) 00:09:59.272 10.193 - 10.240: 
98.9650% ( 1) 00:09:59.272 11.425 - 11.473: 98.9723% ( 1) 00:09:59.272 11.662 - 11.710: 98.9797% ( 1) 00:09:59.272 11.757 - 11.804: 98.9870% ( 1) 00:09:59.272 15.360 - 15.455: 98.9943% ( 1) 00:09:59.272 15.550 - 15.644: 99.0090% ( 2) 00:09:59.272 15.644 - 15.739: 99.0237% ( 2) 00:09:59.272 15.739 - 15.834: 99.0457% ( 3) 00:09:59.272 15.834 - 15.929: 99.0604% ( 2) 00:09:59.272 15.929 - 16.024: 99.0824% ( 3) 00:09:59.272 16.024 - 16.119: 99.0971% ( 2) 00:09:59.272 16.119 - 16.213: 99.1118% ( 2) 00:09:59.272 [2024-04-19 03:23:36.419155] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:59.272 16.213 - 16.308: 99.1265% ( 2) 00:09:59.272 16.308 - 16.403: 99.1412% ( 2) 00:09:59.272 16.403 - 16.498: 99.1852% ( 6) 00:09:59.272 16.498 - 16.593: 99.1925% ( 1) 00:09:59.272 16.593 - 16.687: 99.2146% ( 3) 00:09:59.272 16.687 - 16.782: 99.2366% ( 3) 00:09:59.272 16.972 - 17.067: 99.2513% ( 2) 00:09:59.272 17.067 - 17.161: 99.2659% ( 2) 00:09:59.272 17.161 - 17.256: 99.2806% ( 2) 00:09:59.272 17.256 - 17.351: 99.2953% ( 2) 00:09:59.272 17.446 - 17.541: 99.3100% ( 2) 00:09:59.272 17.541 - 17.636: 99.3247% ( 2) 00:09:59.272 17.636 - 17.730: 99.3394% ( 2) 00:09:59.272 17.920 - 18.015: 99.3540% ( 2) 00:09:59.272 18.015 - 18.110: 99.3614% ( 1) 00:09:59.272 18.110 - 18.204: 99.3687% ( 1) 00:09:59.272 18.204 - 18.299: 99.3761% ( 1) 00:09:59.272 18.584 - 18.679: 99.3907% ( 2) 00:09:59.272 18.679 - 18.773: 99.3981% ( 1) 00:09:59.272 1019.449 - 1025.517: 99.4054% ( 1) 00:09:59.272 1031.585 - 1037.653: 99.4201% ( 2) 00:09:59.273 1037.653 - 1043.721: 99.4274% ( 1) 00:09:59.273 2038.898 - 2051.034: 99.4348% ( 1) 00:09:59.273 3980.705 - 4004.978: 99.7945% ( 49) 00:09:59.273 4004.978 - 4029.250: 99.9560% ( 22) 00:09:59.273 4029.250 - 4053.523: 99.9633% ( 1) 00:09:59.273 5995.330 - 6019.603: 99.9706% ( 1) 00:09:59.273 6990.507 - 7039.052: 100.0000% ( 4) 00:09:59.273 00:09:59.273 
03:23:36 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:09:59.273 03:23:36 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:09:59.273 03:23:36 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:09:59.273 03:23:36 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:09:59.273 03:23:36 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:59.273 [ 00:09:59.273 { 00:09:59.273 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:59.273 "subtype": "Discovery", 00:09:59.273 "listen_addresses": [], 00:09:59.273 "allow_any_host": true, 00:09:59.273 "hosts": [] 00:09:59.273 }, 00:09:59.273 { 00:09:59.273 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:59.273 "subtype": "NVMe", 00:09:59.273 "listen_addresses": [ 00:09:59.273 { 00:09:59.273 "transport": "VFIOUSER", 00:09:59.273 "trtype": "VFIOUSER", 00:09:59.273 "adrfam": "IPv4", 00:09:59.273 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:59.273 "trsvcid": "0" 00:09:59.273 } 00:09:59.273 ], 00:09:59.273 "allow_any_host": true, 00:09:59.273 "hosts": [], 00:09:59.273 "serial_number": "SPDK1", 00:09:59.273 "model_number": "SPDK bdev Controller", 00:09:59.273 "max_namespaces": 32, 00:09:59.273 "min_cntlid": 1, 00:09:59.273 "max_cntlid": 65519, 00:09:59.273 "namespaces": [ 00:09:59.273 { 00:09:59.273 "nsid": 1, 00:09:59.273 "bdev_name": "Malloc1", 00:09:59.273 "name": "Malloc1", 00:09:59.273 "nguid": 
"F77243D02190498396ADAD75931B4A24", 00:09:59.273 "uuid": "f77243d0-2190-4983-96ad-ad75931b4a24" 00:09:59.273 }, 00:09:59.273 { 00:09:59.273 "nsid": 2, 00:09:59.273 "bdev_name": "Malloc3", 00:09:59.273 "name": "Malloc3", 00:09:59.273 "nguid": "6C7D01CCBE534A03B557D12B72D0141E", 00:09:59.273 "uuid": "6c7d01cc-be53-4a03-b557-d12b72d0141e" 00:09:59.273 } 00:09:59.273 ] 00:09:59.273 }, 00:09:59.273 { 00:09:59.273 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:59.273 "subtype": "NVMe", 00:09:59.273 "listen_addresses": [ 00:09:59.273 { 00:09:59.273 "transport": "VFIOUSER", 00:09:59.273 "trtype": "VFIOUSER", 00:09:59.273 "adrfam": "IPv4", 00:09:59.273 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:59.273 "trsvcid": "0" 00:09:59.273 } 00:09:59.273 ], 00:09:59.273 "allow_any_host": true, 00:09:59.273 "hosts": [], 00:09:59.273 "serial_number": "SPDK2", 00:09:59.273 "model_number": "SPDK bdev Controller", 00:09:59.273 "max_namespaces": 32, 00:09:59.273 "min_cntlid": 1, 00:09:59.273 "max_cntlid": 65519, 00:09:59.273 "namespaces": [ 00:09:59.273 { 00:09:59.273 "nsid": 1, 00:09:59.273 "bdev_name": "Malloc2", 00:09:59.273 "name": "Malloc2", 00:09:59.273 "nguid": "518B087EFC58488C93DAAAD29AC99036", 00:09:59.273 "uuid": "518b087e-fc58-488c-93da-aad29ac99036" 00:09:59.273 } 00:09:59.273 ] 00:09:59.273 } 00:09:59.273 ] 00:09:59.273 03:23:36 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:09:59.273 03:23:36 -- target/nvmf_vfio_user.sh@34 -- # aerpid=203047 00:09:59.273 03:23:36 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:09:59.273 03:23:36 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:09:59.273 03:23:36 -- common/autotest_common.sh@1251 -- # local i=0 00:09:59.273 03:23:36 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:59.273 03:23:36 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:59.273 03:23:36 -- common/autotest_common.sh@1262 -- # return 0 00:09:59.273 03:23:36 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:09:59.273 03:23:36 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:09:59.273 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.531 [2024-04-19 03:23:36.900815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:59.531 Malloc4 00:09:59.531 03:23:37 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:09:59.789 [2024-04-19 03:23:37.247367] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:59.789 03:23:37 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:59.789 Asynchronous Event Request test 00:09:59.789 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:09:59.789 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:09:59.789 Registering asynchronous event callbacks... 00:09:59.789 Starting namespace attribute notice tests for all controllers... 
00:09:59.789 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:09:59.789 aer_cb - Changed Namespace 00:09:59.789 Cleaning up... 00:10:00.048 [ 00:10:00.048 { 00:10:00.048 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:00.048 "subtype": "Discovery", 00:10:00.048 "listen_addresses": [], 00:10:00.048 "allow_any_host": true, 00:10:00.048 "hosts": [] 00:10:00.048 }, 00:10:00.048 { 00:10:00.048 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:00.048 "subtype": "NVMe", 00:10:00.048 "listen_addresses": [ 00:10:00.048 { 00:10:00.048 "transport": "VFIOUSER", 00:10:00.048 "trtype": "VFIOUSER", 00:10:00.048 "adrfam": "IPv4", 00:10:00.048 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:00.048 "trsvcid": "0" 00:10:00.048 } 00:10:00.048 ], 00:10:00.048 "allow_any_host": true, 00:10:00.048 "hosts": [], 00:10:00.048 "serial_number": "SPDK1", 00:10:00.048 "model_number": "SPDK bdev Controller", 00:10:00.048 "max_namespaces": 32, 00:10:00.048 "min_cntlid": 1, 00:10:00.048 "max_cntlid": 65519, 00:10:00.048 "namespaces": [ 00:10:00.048 { 00:10:00.048 "nsid": 1, 00:10:00.048 "bdev_name": "Malloc1", 00:10:00.048 "name": "Malloc1", 00:10:00.048 "nguid": "F77243D02190498396ADAD75931B4A24", 00:10:00.048 "uuid": "f77243d0-2190-4983-96ad-ad75931b4a24" 00:10:00.048 }, 00:10:00.048 { 00:10:00.048 "nsid": 2, 00:10:00.048 "bdev_name": "Malloc3", 00:10:00.048 "name": "Malloc3", 00:10:00.048 "nguid": "6C7D01CCBE534A03B557D12B72D0141E", 00:10:00.048 "uuid": "6c7d01cc-be53-4a03-b557-d12b72d0141e" 00:10:00.048 } 00:10:00.048 ] 00:10:00.048 }, 00:10:00.048 { 00:10:00.048 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:00.048 "subtype": "NVMe", 00:10:00.048 "listen_addresses": [ 00:10:00.048 { 00:10:00.048 "transport": "VFIOUSER", 00:10:00.048 "trtype": "VFIOUSER", 00:10:00.048 "adrfam": "IPv4", 00:10:00.048 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:00.048 "trsvcid": "0" 00:10:00.048 } 00:10:00.048 ], 00:10:00.048 "allow_any_host": true, 00:10:00.048 "hosts": [], 00:10:00.048 "serial_number": "SPDK2", 00:10:00.048 "model_number": "SPDK bdev Controller", 00:10:00.048 "max_namespaces": 32, 00:10:00.048 "min_cntlid": 1, 00:10:00.048 "max_cntlid": 65519, 00:10:00.048 "namespaces": [ 00:10:00.048 { 00:10:00.048 "nsid": 1, 00:10:00.048 "bdev_name": "Malloc2", 00:10:00.048 "name": "Malloc2", 00:10:00.048 "nguid": "518B087EFC58488C93DAAAD29AC99036", 00:10:00.048 "uuid": "518b087e-fc58-488c-93da-aad29ac99036" 00:10:00.048 }, 00:10:00.048 { 00:10:00.048 "nsid": 2, 00:10:00.048 "bdev_name": "Malloc4", 00:10:00.048 "name": "Malloc4", 00:10:00.048 "nguid": "6F7094645CCA47B6944FFFF9C812BFF7", 00:10:00.048 "uuid": "6f709464-5cca-47b6-944f-fff9c812bff7" 00:10:00.048 } 00:10:00.048 ] 00:10:00.048 } 00:10:00.048 ] 00:10:00.048 03:23:37 -- target/nvmf_vfio_user.sh@44 -- # wait 203047 00:10:00.048 03:23:37 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:00.048 03:23:37 -- target/nvmf_vfio_user.sh@95 -- # killprocess 197427 00:10:00.048 03:23:37 -- common/autotest_common.sh@936 -- # '[' -z 197427 ']' 00:10:00.048 03:23:37 -- common/autotest_common.sh@940 -- # kill -0 197427 00:10:00.048 03:23:37 -- common/autotest_common.sh@941 -- # uname 00:10:00.048 03:23:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:00.048 03:23:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 197427 00:10:00.048 03:23:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:00.048 03:23:37 -- common/autotest_common.sh@946 -- # '[' 
reactor_0 = sudo ']' 00:10:00.048 03:23:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 197427' 00:10:00.048 killing process with pid 197427 00:10:00.048 03:23:37 -- common/autotest_common.sh@955 -- # kill 197427 00:10:00.048 [2024-04-19 03:23:37.526712] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:10:00.048 03:23:37 -- common/autotest_common.sh@960 -- # wait 197427 00:10:00.615 03:23:37 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:00.615 03:23:37 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:00.615 03:23:37 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:00.615 03:23:37 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:00.615 03:23:37 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:00.615 03:23:37 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=203187 00:10:00.615 03:23:37 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:00.615 03:23:37 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 203187' 00:10:00.615 Process pid: 203187 00:10:00.615 03:23:37 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:00.615 03:23:37 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 203187 00:10:00.615 03:23:37 -- common/autotest_common.sh@817 -- # '[' -z 203187 ']' 00:10:00.615 03:23:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.615 03:23:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:00.615 03:23:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.615 03:23:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:00.615 03:23:37 -- common/autotest_common.sh@10 -- # set +x 00:10:00.615 [2024-04-19 03:23:37.958865] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:00.615 [2024-04-19 03:23:37.959958] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:10:00.615 [2024-04-19 03:23:37.960019] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.615 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.615 [2024-04-19 03:23:38.024205] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.615 [2024-04-19 03:23:38.139408] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.615 [2024-04-19 03:23:38.139483] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.615 [2024-04-19 03:23:38.139500] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.615 [2024-04-19 03:23:38.139514] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.615 [2024-04-19 03:23:38.139526] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
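With the interrupt-mode target up and its poll groups switched over, the xtrace that follows rebuilds the two vfio-user devices over JSON-RPC. Condensed into one place (device 1 shown; the trace repeats the same steps for device 2 with Malloc2, SPDK2 and vfio-user2), the per-device sequence is:
# Condensed from the rpc.py calls traced below; paths as in this workspace.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t VFIOUSER -M -I        # transport, with the extra flags passed in this run
mkdir -p /var/run/vfio-user/domain/vfio-user1/1     # directory backing the vfio-user socket
$RPC bdev_malloc_create 64 512 -b Malloc1           # 64 MB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0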
00:10:00.615 [2024-04-19 03:23:38.139615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.615 [2024-04-19 03:23:38.139671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.615 [2024-04-19 03:23:38.139791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.615 [2024-04-19 03:23:38.139795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.873 [2024-04-19 03:23:38.237339] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:10:00.874 [2024-04-19 03:23:38.237583] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:10:00.874 [2024-04-19 03:23:38.237874] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:10:00.874 [2024-04-19 03:23:38.238607] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:00.874 [2024-04-19 03:23:38.238728] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:10:01.439 03:23:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:01.439 03:23:38 -- common/autotest_common.sh@850 -- # return 0 00:10:01.439 03:23:38 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:02.372 03:23:39 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:02.630 03:23:40 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:02.630 03:23:40 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:02.630 03:23:40 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:02.630 03:23:40 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:02.630 03:23:40 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:02.889 Malloc1 00:10:03.147 03:23:40 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:03.405 03:23:40 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:03.662 03:23:40 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:03.921 03:23:41 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:03.921 03:23:41 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:03.921 03:23:41 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:03.921 Malloc2 00:10:04.178 03:23:41 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:04.436 03:23:41 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:04.694 03:23:42 -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:04.952 03:23:42 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:04.952 03:23:42 -- target/nvmf_vfio_user.sh@95 -- # killprocess 203187 00:10:04.952 03:23:42 -- common/autotest_common.sh@936 -- # '[' -z 203187 ']' 00:10:04.952 03:23:42 -- common/autotest_common.sh@940 -- # kill -0 203187 00:10:04.952 03:23:42 -- common/autotest_common.sh@941 -- # uname 00:10:04.952 03:23:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:04.952 03:23:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 203187 00:10:04.952 03:23:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:04.952 03:23:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:04.952 03:23:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 203187' 00:10:04.952 killing process with pid 203187 00:10:04.952 03:23:42 -- common/autotest_common.sh@955 -- # kill 203187 00:10:04.952 03:23:42 -- common/autotest_common.sh@960 -- # wait 203187 00:10:05.210 03:23:42 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:05.210 03:23:42 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:05.210 00:10:05.210 real 0m53.779s 00:10:05.210 user 3m31.796s 00:10:05.210 sys 0m4.611s 00:10:05.210 03:23:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:05.210 03:23:42 -- common/autotest_common.sh@10 -- # set +x 00:10:05.210 ************************************ 00:10:05.210 END TEST nvmf_vfio_user 00:10:05.210 ************************************ 00:10:05.210 03:23:42 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:05.210 03:23:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:05.210 03:23:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:05.210 03:23:42 -- common/autotest_common.sh@10 -- # set +x 00:10:05.469 ************************************ 00:10:05.469 START TEST nvmf_vfio_user_nvme_compliance 00:10:05.469 ************************************ 00:10:05.469 03:23:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:05.469 * Looking for test storage... 
00:10:05.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:05.469 03:23:42 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.469 03:23:42 -- nvmf/common.sh@7 -- # uname -s 00:10:05.469 03:23:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.469 03:23:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.469 03:23:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.469 03:23:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.469 03:23:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.469 03:23:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.469 03:23:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.469 03:23:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.469 03:23:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.469 03:23:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.469 03:23:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:05.469 03:23:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:05.469 03:23:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.469 03:23:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.469 03:23:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.469 03:23:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.469 03:23:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.469 03:23:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.469 03:23:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.469 03:23:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.469 03:23:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.469 03:23:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.469 03:23:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.469 03:23:42 -- paths/export.sh@5 -- # export PATH 00:10:05.469 03:23:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.469 03:23:42 -- nvmf/common.sh@47 -- # : 0 00:10:05.469 03:23:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.469 03:23:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.469 03:23:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.469 03:23:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.469 03:23:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.469 03:23:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.469 03:23:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:05.469 03:23:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.469 03:23:42 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.469 03:23:42 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.469 03:23:42 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:05.469 03:23:42 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:05.469 03:23:42 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:05.469 03:23:42 -- compliance/compliance.sh@20 -- # nvmfpid=203808 00:10:05.469 03:23:42 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:05.469 03:23:42 -- compliance/compliance.sh@21 -- # echo 'Process pid: 203808' 00:10:05.469 Process pid: 203808 00:10:05.469 03:23:42 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:05.469 03:23:42 -- compliance/compliance.sh@24 -- # waitforlisten 203808 00:10:05.469 03:23:42 -- common/autotest_common.sh@817 -- # '[' -z 203808 ']' 00:10:05.469 03:23:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.469 03:23:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:05.469 03:23:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.469 03:23:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:05.469 03:23:42 -- common/autotest_common.sh@10 -- # set +x 00:10:05.469 [2024-04-19 03:23:42.897273] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
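The waitforlisten gate above is, at heart, a poll loop against the target's RPC socket. A minimal sketch of the idea, assuming the /var/tmp/spdk.sock address shown in the trace; the loop body is illustrative, not the harness's literal helper:

  # illustrative poll: proceed as soon as the RPC server answers
  for _ in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done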
00:10:05.470 [2024-04-19 03:23:42.897357] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.470 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.470 [2024-04-19 03:23:42.960481] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:05.729 [2024-04-19 03:23:43.080708] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.729 [2024-04-19 03:23:43.080794] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.729 [2024-04-19 03:23:43.080811] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.729 [2024-04-19 03:23:43.080824] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.729 [2024-04-19 03:23:43.080836] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.729 [2024-04-19 03:23:43.080896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.729 [2024-04-19 03:23:43.080922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.729 [2024-04-19 03:23:43.080936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.294 03:23:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:06.294 03:23:43 -- common/autotest_common.sh@850 -- # return 0 00:10:06.294 03:23:43 -- compliance/compliance.sh@26 -- # sleep 1 00:10:07.667 03:23:44 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:07.667 03:23:44 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:07.667 03:23:44 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:07.667 03:23:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.667 03:23:44 -- common/autotest_common.sh@10 -- # set +x 00:10:07.667 03:23:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.667 03:23:44 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:07.667 03:23:44 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:07.667 03:23:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.667 03:23:44 -- common/autotest_common.sh@10 -- # set +x 00:10:07.667 malloc0 00:10:07.667 03:23:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.667 03:23:44 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:07.667 03:23:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.667 03:23:44 -- common/autotest_common.sh@10 -- # set +x 00:10:07.667 03:23:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.667 03:23:44 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:07.667 03:23:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.667 03:23:44 -- common/autotest_common.sh@10 -- # set +x 00:10:07.667 03:23:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.667 03:23:44 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:07.667 03:23:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.667 03:23:44 -- common/autotest_common.sh@10 -- # set +x 00:10:07.667 03:23:44 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.667 03:23:44 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:07.667 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.667 00:10:07.667 00:10:07.667 CUnit - A unit testing framework for C - Version 2.1-3 00:10:07.667 http://cunit.sourceforge.net/ 00:10:07.667 00:10:07.667 00:10:07.667 Suite: nvme_compliance 00:10:07.667 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-19 03:23:45.057987] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:07.667 [2024-04-19 03:23:45.059475] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:07.667 [2024-04-19 03:23:45.059504] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:07.667 [2024-04-19 03:23:45.059519] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:07.667 [2024-04-19 03:23:45.061006] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:07.667 passed 00:10:07.667 Test: admin_identify_ctrlr_verify_fused ...[2024-04-19 03:23:45.151675] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:07.667 [2024-04-19 03:23:45.154698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:07.667 passed 00:10:07.925 Test: admin_identify_ns ...[2024-04-19 03:23:45.250186] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:07.925 [2024-04-19 03:23:45.309399] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:07.925 [2024-04-19 03:23:45.317401] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:07.925 [2024-04-19 03:23:45.338542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:07.925 passed 00:10:07.925 Test: admin_get_features_mandatory_features ...[2024-04-19 03:23:45.428673] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:07.925 [2024-04-19 03:23:45.431693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:07.925 passed 00:10:08.183 Test: admin_get_features_optional_features ...[2024-04-19 03:23:45.522326] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:08.183 [2024-04-19 03:23:45.525348] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:08.183 passed 00:10:08.183 Test: admin_set_features_number_of_queues ...[2024-04-19 03:23:45.616187] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:08.183 [2024-04-19 03:23:45.720526] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:08.440 passed 00:10:08.440 Test: admin_get_log_page_mandatory_logs ...[2024-04-19 03:23:45.812954] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:08.440 [2024-04-19 03:23:45.815978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:08.440 passed 00:10:08.440 Test: admin_get_log_page_with_lpo ...[2024-04-19 03:23:45.906655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:08.440 [2024-04-19 03:23:45.974398] 
ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:08.440 [2024-04-19 03:23:45.987491] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:08.698 passed 00:10:08.698 Test: fabric_property_get ...[2024-04-19 03:23:46.080037] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:08.698 [2024-04-19 03:23:46.081355] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:08.698 [2024-04-19 03:23:46.083068] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:08.698 passed 00:10:08.698 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-19 03:23:46.175844] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:08.698 [2024-04-19 03:23:46.177171] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:08.698 [2024-04-19 03:23:46.178875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:08.698 passed 00:10:08.956 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-19 03:23:46.270162] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:08.956 [2024-04-19 03:23:46.353400] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:08.956 [2024-04-19 03:23:46.369392] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:08.956 [2024-04-19 03:23:46.374517] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:08.956 passed 00:10:08.956 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-19 03:23:46.463593] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:08.956 [2024-04-19 03:23:46.464903] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:08.956 [2024-04-19 03:23:46.467627] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:08.956 passed 00:10:09.214 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-19 03:23:46.557309] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:09.214 [2024-04-19 03:23:46.632394] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:09.214 [2024-04-19 03:23:46.656397] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:09.214 [2024-04-19 03:23:46.661519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:09.214 passed 00:10:09.214 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-19 03:23:46.754005] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:09.214 [2024-04-19 03:23:46.755312] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:09.214 [2024-04-19 03:23:46.755357] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:09.214 [2024-04-19 03:23:46.757035] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:09.472 passed 00:10:09.472 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-19 03:23:46.846707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:09.472 [2024-04-19 03:23:46.942390] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:09.472 [2024-04-19 03:23:46.950395] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:09.472 [2024-04-19 03:23:46.958391] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:09.472 [2024-04-19 03:23:46.966391] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:09.472 [2024-04-19 03:23:46.994594] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:09.472 passed 00:10:09.730 Test: admin_create_io_sq_verify_pc ...[2024-04-19 03:23:47.083661] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:09.730 [2024-04-19 03:23:47.099405] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:09.730 [2024-04-19 03:23:47.117136] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:09.730 passed 00:10:09.730 Test: admin_create_io_qp_max_qps ...[2024-04-19 03:23:47.207793] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:11.103 [2024-04-19 03:23:48.303400] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:11.361 [2024-04-19 03:23:48.683497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:11.361 passed 00:10:11.361 Test: admin_create_io_sq_shared_cq ...[2024-04-19 03:23:48.775514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:11.361 [2024-04-19 03:23:48.905390] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:11.625 [2024-04-19 03:23:48.941535] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:11.625 passed 00:10:11.625 00:10:11.625 Run Summary: Type Total Ran Passed Failed Inactive 00:10:11.625 suites 1 1 n/a 0 0 00:10:11.625 tests 18 18 18 0 0 00:10:11.625 asserts 360 360 360 0 n/a 00:10:11.625 00:10:11.625 Elapsed time = 1.625 seconds 00:10:11.625 03:23:48 -- compliance/compliance.sh@42 -- # killprocess 203808 00:10:11.625 03:23:48 -- common/autotest_common.sh@936 -- # '[' -z 203808 ']' 00:10:11.625 03:23:48 -- common/autotest_common.sh@940 -- # kill -0 203808 00:10:11.625 03:23:48 -- common/autotest_common.sh@941 -- # uname 00:10:11.625 03:23:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:11.625 03:23:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 203808 00:10:11.625 03:23:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:11.625 03:23:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:11.625 03:23:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 203808' 00:10:11.625 killing process with pid 203808 00:10:11.625 03:23:49 -- common/autotest_common.sh@955 -- # kill 203808 00:10:11.625 03:23:49 -- common/autotest_common.sh@960 -- # wait 203808 00:10:11.954 03:23:49 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:11.954 03:23:49 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:11.954 00:10:11.954 real 0m6.563s 00:10:11.954 user 0m18.609s 00:10:11.954 sys 0m0.588s 00:10:11.955 03:23:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:11.955 03:23:49 -- common/autotest_common.sh@10 -- # set +x 00:10:11.955 ************************************ 00:10:11.955 END TEST 
nvmf_vfio_user_nvme_compliance 00:10:11.955 ************************************ 00:10:11.955 03:23:49 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:11.955 03:23:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:11.955 03:23:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:11.955 03:23:49 -- common/autotest_common.sh@10 -- # set +x 00:10:11.955 ************************************ 00:10:11.955 START TEST nvmf_vfio_user_fuzz 00:10:11.955 ************************************ 00:10:11.955 03:23:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:12.220 * Looking for test storage... 00:10:12.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.220 03:23:49 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.220 03:23:49 -- nvmf/common.sh@7 -- # uname -s 00:10:12.220 03:23:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.220 03:23:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.220 03:23:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.220 03:23:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.220 03:23:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.220 03:23:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.220 03:23:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.220 03:23:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.220 03:23:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.220 03:23:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.220 03:23:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.220 03:23:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.220 03:23:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.220 03:23:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.220 03:23:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.220 03:23:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.220 03:23:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.220 03:23:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.220 03:23:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.220 03:23:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.220 03:23:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.220 03:23:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.220 03:23:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.220 03:23:49 -- paths/export.sh@5 -- # export PATH 00:10:12.220 03:23:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.220 03:23:49 -- nvmf/common.sh@47 -- # : 0 00:10:12.220 03:23:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.220 03:23:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.220 03:23:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.220 03:23:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.220 03:23:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.220 03:23:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.220 03:23:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.220 03:23:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.220 03:23:49 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:12.220 03:23:49 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:12.220 03:23:49 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:12.220 03:23:49 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:12.220 03:23:49 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:12.220 03:23:49 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:12.220 03:23:49 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:12.220 03:23:49 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=204667 00:10:12.220 03:23:49 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:12.220 03:23:49 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 204667' 00:10:12.220 Process pid: 204667 00:10:12.220 03:23:49 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:12.220 03:23:49 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 204667 00:10:12.220 03:23:49 -- common/autotest_common.sh@817 -- # 
'[' -z 204667 ']' 00:10:12.220 03:23:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.220 03:23:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:12.220 03:23:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.220 03:23:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:12.220 03:23:49 -- common/autotest_common.sh@10 -- # set +x 00:10:12.478 03:23:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:12.478 03:23:49 -- common/autotest_common.sh@850 -- # return 0 00:10:12.478 03:23:49 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:13.411 03:23:50 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:13.411 03:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.411 03:23:50 -- common/autotest_common.sh@10 -- # set +x 00:10:13.411 03:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.411 03:23:50 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:13.411 03:23:50 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:13.411 03:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.411 03:23:50 -- common/autotest_common.sh@10 -- # set +x 00:10:13.411 malloc0 00:10:13.411 03:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.411 03:23:50 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:13.411 03:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.411 03:23:50 -- common/autotest_common.sh@10 -- # set +x 00:10:13.411 03:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.411 03:23:50 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:13.411 03:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.411 03:23:50 -- common/autotest_common.sh@10 -- # set +x 00:10:13.411 03:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.411 03:23:50 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:13.411 03:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.411 03:23:50 -- common/autotest_common.sh@10 -- # set +x 00:10:13.669 03:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.669 03:23:50 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:13.669 03:23:50 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:10:45.734 Fuzzing completed. 
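The fuzz pass that just completed reduces to a short, reproducible recipe; every command below appears verbatim in the trace above (absolute paths shortened), and the fixed -S 123456 seed means a failing run can be replayed deterministically:

  # target side: vfio-user transport, one 64 MiB malloc namespace, one listener
  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc.py bdev_malloc_create 64 512 -b malloc0
  rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  # initiator side: 30-second run pinned to core 1
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a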
Shutting down the fuzz application 00:10:45.734 00:10:45.734 Dumping successful admin opcodes: 00:10:45.734 8, 9, 10, 24, 00:10:45.734 Dumping successful io opcodes: 00:10:45.734 0, 00:10:45.734 NS: 0x200003a1ef00 I/O qp, Total commands completed: 607829, total successful commands: 2349, random_seed: 1314715008 00:10:45.734 NS: 0x200003a1ef00 admin qp, Total commands completed: 77370, total successful commands: 598, random_seed: 3604106496 00:10:45.735 03:24:22 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:10:45.735 03:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:45.735 03:24:22 -- common/autotest_common.sh@10 -- # set +x 00:10:45.735 03:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:45.735 03:24:22 -- target/vfio_user_fuzz.sh@46 -- # killprocess 204667 00:10:45.735 03:24:22 -- common/autotest_common.sh@936 -- # '[' -z 204667 ']' 00:10:45.735 03:24:22 -- common/autotest_common.sh@940 -- # kill -0 204667 00:10:45.735 03:24:22 -- common/autotest_common.sh@941 -- # uname 00:10:45.735 03:24:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:45.735 03:24:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 204667 00:10:45.735 03:24:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:45.735 03:24:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:45.735 03:24:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 204667' 00:10:45.735 killing process with pid 204667 00:10:45.735 03:24:22 -- common/autotest_common.sh@955 -- # kill 204667 00:10:45.735 03:24:22 -- common/autotest_common.sh@960 -- # wait 204667 00:10:45.735 03:24:22 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:10:45.735 03:24:22 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:10:45.735 00:10:45.735 real 0m33.379s 00:10:45.735 user 0m33.512s 00:10:45.735 sys 0m30.153s 00:10:45.735 03:24:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:45.735 03:24:22 -- common/autotest_common.sh@10 -- # set +x 00:10:45.735 ************************************ 00:10:45.735 END TEST nvmf_vfio_user_fuzz 00:10:45.735 ************************************ 00:10:45.735 03:24:22 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:45.735 03:24:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:45.735 03:24:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:45.735 03:24:22 -- common/autotest_common.sh@10 -- # set +x 00:10:45.735 ************************************ 00:10:45.735 START TEST nvmf_host_management 00:10:45.735 ************************************ 00:10:45.735 03:24:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:45.735 * Looking for test storage... 
00:10:45.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.735 03:24:23 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.735 03:24:23 -- nvmf/common.sh@7 -- # uname -s 00:10:45.735 03:24:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.735 03:24:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.735 03:24:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.735 03:24:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.735 03:24:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.735 03:24:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.735 03:24:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.735 03:24:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.735 03:24:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.735 03:24:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.735 03:24:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:45.735 03:24:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:45.735 03:24:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.735 03:24:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.735 03:24:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.735 03:24:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.735 03:24:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.735 03:24:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.735 03:24:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.735 03:24:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.735 03:24:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.735 03:24:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.735 03:24:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.735 03:24:23 -- paths/export.sh@5 -- # export PATH 00:10:45.735 03:24:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.735 03:24:23 -- nvmf/common.sh@47 -- # : 0 00:10:45.735 03:24:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:45.735 03:24:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:45.735 03:24:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.735 03:24:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.735 03:24:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.735 03:24:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:45.735 03:24:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:45.735 03:24:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:45.735 03:24:23 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.735 03:24:23 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:45.736 03:24:23 -- target/host_management.sh@105 -- # nvmftestinit 00:10:45.736 03:24:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:45.736 03:24:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.736 03:24:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:45.736 03:24:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:45.736 03:24:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:45.736 03:24:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.736 03:24:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:45.736 03:24:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.736 03:24:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:45.736 03:24:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:45.736 03:24:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:45.736 03:24:23 -- common/autotest_common.sh@10 -- # set +x 00:10:47.638 03:24:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:47.638 03:24:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:47.639 03:24:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:47.639 03:24:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:47.639 03:24:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:47.639 03:24:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:47.639 03:24:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:47.639 03:24:24 -- nvmf/common.sh@295 -- # net_devs=() 00:10:47.639 03:24:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:47.639 
03:24:24 -- nvmf/common.sh@296 -- # e810=() 00:10:47.639 03:24:24 -- nvmf/common.sh@296 -- # local -ga e810 00:10:47.639 03:24:24 -- nvmf/common.sh@297 -- # x722=() 00:10:47.639 03:24:24 -- nvmf/common.sh@297 -- # local -ga x722 00:10:47.639 03:24:24 -- nvmf/common.sh@298 -- # mlx=() 00:10:47.639 03:24:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:47.639 03:24:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.639 03:24:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.639 03:24:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.639 03:24:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.639 03:24:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.639 03:24:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.639 03:24:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.639 03:24:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.639 03:24:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.639 03:24:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.639 03:24:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.639 03:24:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:47.639 03:24:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:47.639 03:24:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:47.639 03:24:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.639 03:24:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:47.639 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:47.639 03:24:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.639 03:24:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:47.639 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:47.639 03:24:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:47.639 03:24:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.639 03:24:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.639 03:24:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:47.639 03:24:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.639 03:24:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:10:47.639 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:47.639 03:24:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.639 03:24:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.639 03:24:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.639 03:24:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:47.639 03:24:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.639 03:24:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:47.639 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:47.639 03:24:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.639 03:24:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:47.639 03:24:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:47.639 03:24:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:47.639 03:24:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:47.639 03:24:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.639 03:24:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.639 03:24:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.639 03:24:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:47.639 03:24:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.639 03:24:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.639 03:24:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:47.639 03:24:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.639 03:24:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.639 03:24:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:47.639 03:24:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:47.639 03:24:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.639 03:24:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.639 03:24:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.639 03:24:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.639 03:24:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:47.639 03:24:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.639 03:24:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.639 03:24:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.639 03:24:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:47.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:10:47.639 00:10:47.639 --- 10.0.0.2 ping statistics --- 00:10:47.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.639 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:10:47.639 03:24:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:47.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:10:47.639 00:10:47.639 --- 10.0.0.1 ping statistics --- 00:10:47.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.639 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:10:47.639 03:24:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.639 03:24:25 -- nvmf/common.sh@411 -- # return 0 00:10:47.639 03:24:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:47.639 03:24:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.639 03:24:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:47.639 03:24:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:47.639 03:24:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.639 03:24:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:47.639 03:24:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:47.639 03:24:25 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:10:47.639 03:24:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:47.639 03:24:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.639 03:24:25 -- common/autotest_common.sh@10 -- # set +x 00:10:47.639 ************************************ 00:10:47.639 START TEST nvmf_host_management 00:10:47.639 ************************************ 00:10:47.639 03:24:25 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:10:47.639 03:24:25 -- target/host_management.sh@69 -- # starttarget 00:10:47.639 03:24:25 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:47.639 03:24:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:47.639 03:24:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:47.639 03:24:25 -- common/autotest_common.sh@10 -- # set +x 00:10:47.639 03:24:25 -- nvmf/common.sh@470 -- # nvmfpid=210888 00:10:47.639 03:24:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:47.639 03:24:25 -- nvmf/common.sh@471 -- # waitforlisten 210888 00:10:47.639 03:24:25 -- common/autotest_common.sh@817 -- # '[' -z 210888 ']' 00:10:47.639 03:24:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.639 03:24:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:47.639 03:24:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.639 03:24:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:47.639 03:24:25 -- common/autotest_common.sh@10 -- # set +x 00:10:47.899 [2024-04-19 03:24:25.222045] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:10:47.899 [2024-04-19 03:24:25.222137] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.899 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.899 [2024-04-19 03:24:25.292000] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.899 [2024-04-19 03:24:25.408640] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
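Worth noting before the inner test begins: the 10.0.0.x path being pinged here was assembled by nvmf_tcp_init from the ip/iptables calls in the trace above. A condensed sketch, with the cvl_0_* interface names as detected on this host:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP from the namespace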
00:10:47.899 [2024-04-19 03:24:25.408729] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.899 [2024-04-19 03:24:25.408745] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.899 [2024-04-19 03:24:25.408758] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.899 [2024-04-19 03:24:25.408770] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.899 [2024-04-19 03:24:25.408887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.899 [2024-04-19 03:24:25.408983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.899 [2024-04-19 03:24:25.409050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.899 [2024-04-19 03:24:25.409048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:48.832 03:24:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:48.832 03:24:26 -- common/autotest_common.sh@850 -- # return 0 00:10:48.832 03:24:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:48.832 03:24:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:48.832 03:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:48.832 03:24:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.832 03:24:26 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:48.832 03:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.832 03:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:48.832 [2024-04-19 03:24:26.183026] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.832 03:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.832 03:24:26 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:48.832 03:24:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:48.832 03:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:48.832 03:24:26 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:48.832 03:24:26 -- target/host_management.sh@23 -- # cat 00:10:48.832 03:24:26 -- target/host_management.sh@30 -- # rpc_cmd 00:10:48.832 03:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.832 03:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:48.832 Malloc0 00:10:48.832 [2024-04-19 03:24:26.244124] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.832 03:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.832 03:24:26 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:48.832 03:24:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:48.832 03:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:48.832 03:24:26 -- target/host_management.sh@73 -- # perfpid=211063 00:10:48.832 03:24:26 -- target/host_management.sh@74 -- # waitforlisten 211063 /var/tmp/bdevperf.sock 00:10:48.832 03:24:26 -- common/autotest_common.sh@817 -- # '[' -z 211063 ']' 00:10:48.832 03:24:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:48.832 03:24:26 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify 
-t 10 00:10:48.832 03:24:26 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:48.832 03:24:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:48.832 03:24:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:48.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:48.832 03:24:26 -- nvmf/common.sh@521 -- # config=() 00:10:48.832 03:24:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:48.832 03:24:26 -- nvmf/common.sh@521 -- # local subsystem config 00:10:48.832 03:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:48.832 03:24:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:48.832 03:24:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:48.832 { 00:10:48.832 "params": { 00:10:48.832 "name": "Nvme$subsystem", 00:10:48.832 "trtype": "$TEST_TRANSPORT", 00:10:48.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:48.832 "adrfam": "ipv4", 00:10:48.832 "trsvcid": "$NVMF_PORT", 00:10:48.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:48.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:48.832 "hdgst": ${hdgst:-false}, 00:10:48.832 "ddgst": ${ddgst:-false} 00:10:48.832 }, 00:10:48.832 "method": "bdev_nvme_attach_controller" 00:10:48.832 } 00:10:48.832 EOF 00:10:48.832 )") 00:10:48.832 03:24:26 -- nvmf/common.sh@543 -- # cat 00:10:48.832 03:24:26 -- nvmf/common.sh@545 -- # jq . 00:10:48.832 03:24:26 -- nvmf/common.sh@546 -- # IFS=, 00:10:48.832 03:24:26 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:48.832 "params": { 00:10:48.832 "name": "Nvme0", 00:10:48.832 "trtype": "tcp", 00:10:48.832 "traddr": "10.0.0.2", 00:10:48.832 "adrfam": "ipv4", 00:10:48.832 "trsvcid": "4420", 00:10:48.832 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:48.832 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:48.832 "hdgst": false, 00:10:48.832 "ddgst": false 00:10:48.832 }, 00:10:48.832 "method": "bdev_nvme_attach_controller" 00:10:48.832 }' 00:10:48.832 [2024-04-19 03:24:26.322193] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:10:48.833 [2024-04-19 03:24:26.322282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid211063 ] 00:10:48.833 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.833 [2024-04-19 03:24:26.383172] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.090 [2024-04-19 03:24:26.492956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.347 Running I/O for 10 seconds... 
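While the 10-second verify workload runs: the bdevperf process above was handed its NVMe-oF controller through an on-the-fly JSON config (gen_nvmf_target_json 0) piped in as /dev/fd/63. The inner attach-controller object below is verbatim from the trace; the surrounding "subsystems"/"config" wrapper is an assumption about the helper's full output:

  build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
      --json <(cat <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false } } ] } ] }
  EOF
  )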
00:10:49.913 03:24:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:49.913 03:24:27 -- common/autotest_common.sh@850 -- # return 0 00:10:49.913 03:24:27 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:49.913 03:24:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.913 03:24:27 -- common/autotest_common.sh@10 -- # set +x 00:10:49.913 03:24:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.913 03:24:27 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:49.913 03:24:27 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:49.913 03:24:27 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:49.913 03:24:27 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:49.914 03:24:27 -- target/host_management.sh@52 -- # local ret=1 00:10:49.914 03:24:27 -- target/host_management.sh@53 -- # local i 00:10:49.914 03:24:27 -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:49.914 03:24:27 -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:49.914 03:24:27 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:49.914 03:24:27 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:49.914 03:24:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.914 03:24:27 -- common/autotest_common.sh@10 -- # set +x 00:10:49.914 03:24:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.914 03:24:27 -- target/host_management.sh@55 -- # read_io_count=643 00:10:49.914 03:24:27 -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:10:49.914 03:24:27 -- target/host_management.sh@59 -- # ret=0 00:10:49.914 03:24:27 -- target/host_management.sh@60 -- # break 00:10:49.914 03:24:27 -- target/host_management.sh@64 -- # return 0 00:10:49.914 03:24:27 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:49.914 03:24:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.914 03:24:27 -- common/autotest_common.sh@10 -- # set +x 00:10:49.914 [2024-04-19 03:24:27.342731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.914 [2024-04-19 03:24:27.342819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.342837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.914 [2024-04-19 03:24:27.342851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.342864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.914 [2024-04-19 03:24:27.342877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.342892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.914 [2024-04-19 03:24:27.342905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.342918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107a170 is same with the state(5) to be set 00:10:49.914 [2024-04-19 03:24:27.343809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.343835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.343860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.343876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.343892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.343906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.343921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.343935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.343950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.343964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.343979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.343993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344108] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 03:24:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.914 [2024-04-19 03:24:27.344122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 03:24:27 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:49.914 [2024-04-19 03:24:27.344286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 03:24:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.914 [2024-04-19 03:24:27.344434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 03:24:27 -- common/autotest_common.sh@10 -- # set +x 00:10:49.914 [2024-04-19 03:24:27.344552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.344978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.344992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.914 [2024-04-19 03:24:27.345729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.914 [2024-04-19 03:24:27.345742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.915 [2024-04-19 03:24:27.345757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:49.915 [2024-04-19 03:24:27.345770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.915 [2024-04-19 03:24:27.345857] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x148a5d0 was disconnected and freed. reset controller. 00:10:49.915 [2024-04-19 03:24:27.346996] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:49.915 task offset: 90112 on job bdev=Nvme0n1 fails 00:10:49.915 00:10:49.915 Latency(us) 00:10:49.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.915 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:49.915 Job: Nvme0n1 ended in about 0.55 seconds with error 00:10:49.915 Verification LBA range: start 0x0 length 0x400 00:10:49.915 Nvme0n1 : 0.55 1289.69 80.61 117.24 0.00 44502.88 2487.94 39030.33 00:10:49.915 =================================================================================================================== 00:10:49.915 Total : 1289.69 80.61 117.24 0.00 44502.88 2487.94 39030.33 00:10:49.915 [2024-04-19 03:24:27.348874] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:49.915 [2024-04-19 03:24:27.348902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107a170 (9): Bad file descriptor 00:10:49.915 03:24:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.915 03:24:27 -- target/host_management.sh@87 -- # sleep 1 00:10:49.915 [2024-04-19 03:24:27.402077] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
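The abort storm above is the intended shape of this test: waitforio first confirms real traffic (read_io_count=643 >= 100), host_management.sh then removes host0 from the subsystem (and re-adds it at @85), so the qpair is torn down, every in-flight WRITE on sqid:1 completes as ABORTED - SQ DELETION, bdevperf reports the job ending in error after about 0.55 s, and the subsequent controller reset succeeds. A minimal sketch of the waitforio loop visible in the trace -- the countdown, the bdev_get_iostat call, and the jq filter are read off the trace lines at @54/@55, while rpc_cmd is the autotest wrapper around rpc.py and the sleep interval is assumed (the actual delay is not visible here):

    # Poll bdevperf's per-bdev iostat until Nvme0n1 shows at least 100 completed reads.
    i=10 ret=1
    while (( i != 0 )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && { ret=0; break; }
        sleep 0.25   # assumed interval
        (( i-- ))
    done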
00:10:50.846 03:24:28 -- target/host_management.sh@91 -- # kill -9 211063 00:10:50.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (211063) - No such process 00:10:50.846 03:24:28 -- target/host_management.sh@91 -- # true 00:10:50.846 03:24:28 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:50.846 03:24:28 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:50.847 03:24:28 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:50.847 03:24:28 -- nvmf/common.sh@521 -- # config=() 00:10:50.847 03:24:28 -- nvmf/common.sh@521 -- # local subsystem config 00:10:50.847 03:24:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:50.847 03:24:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:50.847 { 00:10:50.847 "params": { 00:10:50.847 "name": "Nvme$subsystem", 00:10:50.847 "trtype": "$TEST_TRANSPORT", 00:10:50.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:50.847 "adrfam": "ipv4", 00:10:50.847 "trsvcid": "$NVMF_PORT", 00:10:50.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:50.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:50.847 "hdgst": ${hdgst:-false}, 00:10:50.847 "ddgst": ${ddgst:-false} 00:10:50.847 }, 00:10:50.847 "method": "bdev_nvme_attach_controller" 00:10:50.847 } 00:10:50.847 EOF 00:10:50.847 )") 00:10:50.847 03:24:28 -- nvmf/common.sh@543 -- # cat 00:10:50.847 03:24:28 -- nvmf/common.sh@545 -- # jq . 00:10:50.847 03:24:28 -- nvmf/common.sh@546 -- # IFS=, 00:10:50.847 03:24:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:50.847 "params": { 00:10:50.847 "name": "Nvme0", 00:10:50.847 "trtype": "tcp", 00:10:50.847 "traddr": "10.0.0.2", 00:10:50.847 "adrfam": "ipv4", 00:10:50.847 "trsvcid": "4420", 00:10:50.847 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:50.847 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:50.847 "hdgst": false, 00:10:50.847 "ddgst": false 00:10:50.847 }, 00:10:50.847 "method": "bdev_nvme_attach_controller" 00:10:50.847 }' 00:10:50.847 [2024-04-19 03:24:28.396797] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:10:50.847 [2024-04-19 03:24:28.396879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid211341 ] 00:10:51.105 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.105 [2024-04-19 03:24:28.458530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.105 [2024-04-19 03:24:28.568207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.363 Running I/O for 1 seconds... 
00:10:52.740 00:10:52.740 Latency(us) 00:10:52.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.740 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:52.740 Verification LBA range: start 0x0 length 0x400 00:10:52.740 Nvme0n1 : 1.02 1572.56 98.29 0.00 0.00 39980.00 6456.51 37088.52 00:10:52.740 =================================================================================================================== 00:10:52.740 Total : 1572.56 98.29 0.00 0.00 39980.00 6456.51 37088.52 00:10:52.740 03:24:30 -- target/host_management.sh@102 -- # stoptarget 00:10:52.740 03:24:30 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:52.740 03:24:30 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:52.740 03:24:30 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:52.740 03:24:30 -- target/host_management.sh@40 -- # nvmftestfini 00:10:52.740 03:24:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:52.740 03:24:30 -- nvmf/common.sh@117 -- # sync 00:10:52.740 03:24:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:52.740 03:24:30 -- nvmf/common.sh@120 -- # set +e 00:10:52.740 03:24:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:52.740 03:24:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:52.740 rmmod nvme_tcp 00:10:52.740 rmmod nvme_fabrics 00:10:52.740 rmmod nvme_keyring 00:10:52.740 03:24:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:52.740 03:24:30 -- nvmf/common.sh@124 -- # set -e 00:10:52.740 03:24:30 -- nvmf/common.sh@125 -- # return 0 00:10:52.740 03:24:30 -- nvmf/common.sh@478 -- # '[' -n 210888 ']' 00:10:52.740 03:24:30 -- nvmf/common.sh@479 -- # killprocess 210888 00:10:52.740 03:24:30 -- common/autotest_common.sh@936 -- # '[' -z 210888 ']' 00:10:52.740 03:24:30 -- common/autotest_common.sh@940 -- # kill -0 210888 00:10:52.740 03:24:30 -- common/autotest_common.sh@941 -- # uname 00:10:52.740 03:24:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:52.740 03:24:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 210888 00:10:52.740 03:24:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:52.740 03:24:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:52.740 03:24:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 210888' 00:10:52.740 killing process with pid 210888 00:10:52.740 03:24:30 -- common/autotest_common.sh@955 -- # kill 210888 00:10:52.740 03:24:30 -- common/autotest_common.sh@960 -- # wait 210888 00:10:52.999 [2024-04-19 03:24:30.552335] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:53.259 03:24:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:53.259 03:24:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:53.259 03:24:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:53.259 03:24:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:53.259 03:24:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:53.259 03:24:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.259 03:24:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.259 03:24:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.168 03:24:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:55.168 
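The teardown above is stoptarget plus nvmftestfini: the job state file and the generated bdevperf.conf/rpcs.txt are removed, nvme-tcp, nvme-fabrics and nvme-keyring are unloaded (the rmmod lines), and killprocess reaps the target app whose reactor is still running. A condensed sketch of killprocess as its steps appear in the common/autotest_common.sh trace between lines 936 and 960 -- the function body is reconstructed from those xtrace lines, so treat the exact ordering as assumed; the uname and "= sudo" special cases visible at @941/@946 are elided:

    # killprocess <pid>: confirm the pid exists, log its comm, then signal and reap it.
    killprocess() {
        local pid=$1
        kill -0 "$pid"                                    # errors out if already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_1 here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }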
00:10:55.168 real 0m7.447s 00:10:55.168 user 0m23.288s 00:10:55.168 sys 0m1.362s 00:10:55.168 03:24:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:55.168 03:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:55.168 ************************************ 00:10:55.168 END TEST nvmf_host_management 00:10:55.168 ************************************ 00:10:55.168 03:24:32 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:55.168 00:10:55.168 real 0m9.672s 00:10:55.168 user 0m24.074s 00:10:55.168 sys 0m2.811s 00:10:55.168 03:24:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:55.168 03:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:55.168 ************************************ 00:10:55.168 END TEST nvmf_host_management 00:10:55.168 ************************************ 00:10:55.168 03:24:32 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:55.168 03:24:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:55.168 03:24:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.168 03:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:55.428 ************************************ 00:10:55.428 START TEST nvmf_lvol 00:10:55.428 ************************************ 00:10:55.428 03:24:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:55.428 * Looking for test storage... 00:10:55.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.428 03:24:32 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.428 03:24:32 -- nvmf/common.sh@7 -- # uname -s 00:10:55.428 03:24:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.428 03:24:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.428 03:24:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.428 03:24:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.428 03:24:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.428 03:24:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.428 03:24:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.428 03:24:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.428 03:24:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.428 03:24:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.428 03:24:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.428 03:24:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.428 03:24:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.428 03:24:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.428 03:24:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.428 03:24:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.428 03:24:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.428 03:24:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.428 03:24:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.428 03:24:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.428 03:24:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.428 03:24:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.428 03:24:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.428 03:24:32 -- paths/export.sh@5 -- # export PATH 00:10:55.428 03:24:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.428 03:24:32 -- nvmf/common.sh@47 -- # : 0 00:10:55.428 03:24:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.428 03:24:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.428 03:24:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.428 03:24:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.428 03:24:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.428 03:24:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:55.428 03:24:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.428 03:24:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.428 03:24:32 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:55.428 03:24:32 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:55.428 03:24:32 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:55.428 03:24:32 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:55.428 03:24:32 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:55.428 03:24:32 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:55.428 03:24:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:55.428 03:24:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:55.428 03:24:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:55.428 03:24:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:55.428 03:24:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:55.428 03:24:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.428 03:24:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.428 03:24:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.428 03:24:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:55.428 03:24:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:55.428 03:24:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:55.428 03:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:57.374 03:24:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:57.374 03:24:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:57.374 03:24:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:57.374 03:24:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:57.374 03:24:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:57.374 03:24:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:57.374 03:24:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:57.374 03:24:34 -- nvmf/common.sh@295 -- # net_devs=() 00:10:57.375 03:24:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:57.375 03:24:34 -- nvmf/common.sh@296 -- # e810=() 00:10:57.375 03:24:34 -- nvmf/common.sh@296 -- # local -ga e810 00:10:57.375 03:24:34 -- nvmf/common.sh@297 -- # x722=() 00:10:57.375 03:24:34 -- nvmf/common.sh@297 -- # local -ga x722 00:10:57.375 03:24:34 -- nvmf/common.sh@298 -- # mlx=() 00:10:57.375 03:24:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:57.375 03:24:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.375 03:24:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.375 03:24:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.375 03:24:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.375 03:24:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.375 03:24:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.375 03:24:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.375 03:24:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.375 03:24:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.375 03:24:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.375 03:24:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.375 03:24:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:57.375 03:24:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:57.375 03:24:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:57.375 03:24:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.375 03:24:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:57.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:57.375 03:24:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.375 03:24:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:57.375 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:57.375 03:24:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:57.375 03:24:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.375 03:24:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.375 03:24:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:57.375 03:24:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.375 03:24:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:57.375 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:57.375 03:24:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.375 03:24:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.375 03:24:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.375 03:24:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:57.375 03:24:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.375 03:24:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:57.375 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:57.375 03:24:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.375 03:24:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:57.375 03:24:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:57.375 03:24:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:57.375 03:24:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:57.375 03:24:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.375 03:24:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.375 03:24:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.375 03:24:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:57.375 03:24:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.375 03:24:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.375 03:24:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:57.375 03:24:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.375 03:24:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.375 03:24:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:57.375 03:24:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:57.375 03:24:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.375 03:24:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.375 03:24:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:10:57.375 03:24:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.375 03:24:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:57.375 03:24:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.375 03:24:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.375 03:24:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.634 03:24:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:57.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:10:57.634 00:10:57.634 --- 10.0.0.2 ping statistics --- 00:10:57.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.634 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:10:57.634 03:24:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:57.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:10:57.634 00:10:57.634 --- 10.0.0.1 ping statistics --- 00:10:57.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.634 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:10:57.634 03:24:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.634 03:24:34 -- nvmf/common.sh@411 -- # return 0 00:10:57.634 03:24:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:57.634 03:24:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.634 03:24:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:57.634 03:24:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:57.634 03:24:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.634 03:24:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:57.634 03:24:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:57.634 03:24:34 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:57.634 03:24:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:57.634 03:24:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:57.634 03:24:34 -- common/autotest_common.sh@10 -- # set +x 00:10:57.634 03:24:34 -- nvmf/common.sh@470 -- # nvmfpid=213549 00:10:57.634 03:24:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:57.634 03:24:34 -- nvmf/common.sh@471 -- # waitforlisten 213549 00:10:57.634 03:24:34 -- common/autotest_common.sh@817 -- # '[' -z 213549 ']' 00:10:57.634 03:24:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.634 03:24:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:57.634 03:24:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.634 03:24:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:57.634 03:24:34 -- common/autotest_common.sh@10 -- # set +x 00:10:57.634 [2024-04-19 03:24:35.012257] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:10:57.634 [2024-04-19 03:24:35.012340] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.634 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.634 [2024-04-19 03:24:35.079451] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:57.892 [2024-04-19 03:24:35.194061] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.892 [2024-04-19 03:24:35.194111] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.892 [2024-04-19 03:24:35.194127] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.892 [2024-04-19 03:24:35.194139] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.892 [2024-04-19 03:24:35.194149] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.892 [2024-04-19 03:24:35.194232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.892 [2024-04-19 03:24:35.194284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.892 [2024-04-19 03:24:35.194287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.892 03:24:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:57.892 03:24:35 -- common/autotest_common.sh@850 -- # return 0 00:10:57.892 03:24:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:57.892 03:24:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:57.892 03:24:35 -- common/autotest_common.sh@10 -- # set +x 00:10:57.892 03:24:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.892 03:24:35 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:58.150 [2024-04-19 03:24:35.547571] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.150 03:24:35 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.408 03:24:35 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:58.408 03:24:35 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.666 03:24:36 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:58.666 03:24:36 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:58.924 03:24:36 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:59.181 03:24:36 -- target/nvmf_lvol.sh@29 -- # lvs=024bd9d2-a7f8-4030-85ec-ace70219668a 00:10:59.181 03:24:36 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 024bd9d2-a7f8-4030-85ec-ace70219668a lvol 20 00:10:59.439 03:24:36 -- target/nvmf_lvol.sh@32 -- # lvol=550b8550-fe16-4865-9f67-eed7cf2b9370 00:10:59.439 03:24:36 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:59.697 03:24:37 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 550b8550-fe16-4865-9f67-eed7cf2b9370 00:10:59.954 03:24:37 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:00.212 [2024-04-19 03:24:37.576593] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.212 03:24:37 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:00.476 03:24:37 -- target/nvmf_lvol.sh@42 -- # perf_pid=213869 00:11:00.476 03:24:37 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:00.476 03:24:37 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:00.476 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.409 03:24:38 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 550b8550-fe16-4865-9f67-eed7cf2b9370 MY_SNAPSHOT 00:11:01.667 03:24:39 -- target/nvmf_lvol.sh@47 -- # snapshot=d5c9ad59-dd6f-444d-9931-141830da33a6 00:11:01.667 03:24:39 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 550b8550-fe16-4865-9f67-eed7cf2b9370 30 00:11:01.925 03:24:39 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d5c9ad59-dd6f-444d-9931-141830da33a6 MY_CLONE 00:11:02.183 03:24:39 -- target/nvmf_lvol.sh@49 -- # clone=b7569ea5-ab84-4535-bf13-c2c2fc8ecaec 00:11:02.183 03:24:39 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b7569ea5-ab84-4535-bf13-c2c2fc8ecaec 00:11:03.121 03:24:40 -- target/nvmf_lvol.sh@53 -- # wait 213869 00:11:11.242 Initializing NVMe Controllers 00:11:11.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:11.243 Controller IO queue size 128, less than required. 00:11:11.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:11.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:11.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:11.243 Initialization complete. Launching workers. 
00:11:11.243 ======================================================== 00:11:11.243 Latency(us) 00:11:11.243 Device Information : IOPS MiB/s Average min max 00:11:11.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10602.50 41.42 12081.14 477.18 83715.13 00:11:11.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10547.80 41.20 12138.72 1900.84 68385.19 00:11:11.243 ======================================================== 00:11:11.243 Total : 21150.30 82.62 12109.86 477.18 83715.13 00:11:11.243 00:11:11.243 03:24:48 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:11.243 03:24:48 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 550b8550-fe16-4865-9f67-eed7cf2b9370 00:11:11.500 03:24:48 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 024bd9d2-a7f8-4030-85ec-ace70219668a 00:11:11.760 03:24:49 -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:11.760 03:24:49 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:11.760 03:24:49 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:11.760 03:24:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:11.760 03:24:49 -- nvmf/common.sh@117 -- # sync 00:11:11.760 03:24:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:11.760 03:24:49 -- nvmf/common.sh@120 -- # set +e 00:11:11.760 03:24:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:11.760 03:24:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:11.760 rmmod nvme_tcp 00:11:11.760 rmmod nvme_fabrics 00:11:11.760 rmmod nvme_keyring 00:11:11.760 03:24:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:11.760 03:24:49 -- nvmf/common.sh@124 -- # set -e 00:11:11.760 03:24:49 -- nvmf/common.sh@125 -- # return 0 00:11:11.760 03:24:49 -- nvmf/common.sh@478 -- # '[' -n 213549 ']' 00:11:11.760 03:24:49 -- nvmf/common.sh@479 -- # killprocess 213549 00:11:11.760 03:24:49 -- common/autotest_common.sh@936 -- # '[' -z 213549 ']' 00:11:11.760 03:24:49 -- common/autotest_common.sh@940 -- # kill -0 213549 00:11:11.760 03:24:49 -- common/autotest_common.sh@941 -- # uname 00:11:11.760 03:24:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:11.760 03:24:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 213549 00:11:11.760 03:24:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:11.760 03:24:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:11.760 03:24:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 213549' 00:11:11.760 killing process with pid 213549 00:11:11.760 03:24:49 -- common/autotest_common.sh@955 -- # kill 213549 00:11:11.760 03:24:49 -- common/autotest_common.sh@960 -- # wait 213549 00:11:12.021 03:24:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:12.021 03:24:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:12.021 03:24:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:12.021 03:24:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:12.021 03:24:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:12.021 03:24:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.021 03:24:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.021 03:24:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
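For reference, the lvol lifecycle this test exercised before the fini above, condensed from the rpc.py invocations in the trace -- the method names and arguments are verbatim from the run, while the UUID capture variables are added for readability (the full UUIDs appear above) and the spdk_nvme_perf run, EAL waits, and error handling are omitted:

    # Build the raid0-backed lvstore and a 20M lvol, export it over NVMe/TCP.
    rpc.py bdev_malloc_create 64 512                        # Malloc0
    rpc.py bdev_malloc_create 64 512                        # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)        # 024bd9d2-...
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)       # 550b8550-...
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # With spdk_nvme_perf writing randwrite at qd 128: snapshot, grow, clone, inflate.
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # d5c9ad59-...
    rpc.py bdev_lvol_resize "$lvol" 30                      # grow live lvol 20M -> 30M
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)        # b7569ea5-...
    rpc.py bdev_lvol_inflate "$clone"                       # decouple clone from snapshot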
00:11:14.561 03:24:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:14.561 00:11:14.561 real 0m18.811s 00:11:14.561 user 1m3.835s 00:11:14.561 sys 0m5.573s 00:11:14.561 03:24:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:14.561 03:24:51 -- common/autotest_common.sh@10 -- # set +x 00:11:14.561 ************************************ 00:11:14.561 END TEST nvmf_lvol 00:11:14.561 ************************************ 00:11:14.561 03:24:51 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:14.561 03:24:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:14.561 03:24:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:14.561 03:24:51 -- common/autotest_common.sh@10 -- # set +x 00:11:14.561 ************************************ 00:11:14.561 START TEST nvmf_lvs_grow 00:11:14.561 ************************************ 00:11:14.561 03:24:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:14.561 * Looking for test storage... 00:11:14.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.561 03:24:51 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.561 03:24:51 -- nvmf/common.sh@7 -- # uname -s 00:11:14.561 03:24:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.561 03:24:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.561 03:24:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.561 03:24:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.561 03:24:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.561 03:24:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.561 03:24:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.561 03:24:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.561 03:24:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.561 03:24:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.561 03:24:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.561 03:24:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.561 03:24:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.561 03:24:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.561 03:24:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.561 03:24:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.561 03:24:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.561 03:24:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.561 03:24:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.561 03:24:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.561 03:24:51 -- paths/export.sh@2 -- # 
PATH=[repeated toolchain PATH value elided] 00:11:14.561 03:24:51 -- paths/export.sh@3 -- # PATH=[elided] 00:11:14.561 03:24:51 -- paths/export.sh@4 -- # PATH=[elided] 00:11:14.561 03:24:51 -- paths/export.sh@5 -- # export PATH 00:11:14.561 03:24:51 -- paths/export.sh@6 -- # echo [exported PATH value elided] 00:11:14.561 03:24:51 -- nvmf/common.sh@47 -- # : 0 00:11:14.561 03:24:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:14.561 03:24:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:14.561 03:24:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.561 03:24:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.561 03:24:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.561 03:24:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:14.561 03:24:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:14.561 03:24:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:14.561 03:24:51 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:14.561 03:24:51 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:14.561 03:24:51 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:11:14.561 03:24:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:14.561 03:24:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.561 03:24:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:14.561 03:24:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:14.561 03:24:51 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:11:14.561 03:24:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.561 03:24:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.561 03:24:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.561 03:24:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:14.561 03:24:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:14.561 03:24:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:14.561 03:24:51 -- common/autotest_common.sh@10 -- # set +x 00:11:16.469 03:24:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:16.469 03:24:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:16.469 03:24:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:16.469 03:24:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:16.469 03:24:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:16.469 03:24:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:16.469 03:24:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:16.469 03:24:53 -- nvmf/common.sh@295 -- # net_devs=() 00:11:16.469 03:24:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:16.469 03:24:53 -- nvmf/common.sh@296 -- # e810=() 00:11:16.469 03:24:53 -- nvmf/common.sh@296 -- # local -ga e810 00:11:16.469 03:24:53 -- nvmf/common.sh@297 -- # x722=() 00:11:16.469 03:24:53 -- nvmf/common.sh@297 -- # local -ga x722 00:11:16.469 03:24:53 -- nvmf/common.sh@298 -- # mlx=() 00:11:16.469 03:24:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:16.469 03:24:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.469 03:24:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.469 03:24:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.469 03:24:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.469 03:24:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.469 03:24:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.469 03:24:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.469 03:24:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.469 03:24:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.469 03:24:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.469 03:24:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.469 03:24:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:16.469 03:24:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:16.469 03:24:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:16.469 03:24:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:16.469 03:24:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:16.469 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:16.469 03:24:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:16.469 
03:24:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:16.469 03:24:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:16.469 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:16.469 03:24:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:16.469 03:24:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.469 03:24:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.469 03:24:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:16.469 03:24:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.469 03:24:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:16.469 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:16.469 03:24:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.469 03:24:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.469 03:24:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.469 03:24:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:16.469 03:24:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.469 03:24:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:16.469 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:16.469 03:24:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.469 03:24:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:16.469 03:24:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:16.469 03:24:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:16.469 03:24:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.469 03:24:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.469 03:24:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.469 03:24:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:16.469 03:24:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.469 03:24:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.469 03:24:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:16.469 03:24:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.469 03:24:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.469 03:24:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:16.469 03:24:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:16.469 03:24:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.469 03:24:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.469 03:24:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.469 03:24:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.469 03:24:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:16.469 
03:24:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.469 03:24:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.469 03:24:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.469 03:24:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:16.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:11:16.469 00:11:16.469 --- 10.0.0.2 ping statistics --- 00:11:16.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.469 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:11:16.469 03:24:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:11:16.469 00:11:16.469 --- 10.0.0.1 ping statistics --- 00:11:16.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.469 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:11:16.469 03:24:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.469 03:24:53 -- nvmf/common.sh@411 -- # return 0 00:11:16.469 03:24:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:16.469 03:24:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.469 03:24:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:16.469 03:24:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.469 03:24:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:16.469 03:24:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:16.469 03:24:53 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:11:16.469 03:24:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:16.469 03:24:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:16.470 03:24:53 -- common/autotest_common.sh@10 -- # set +x 00:11:16.470 03:24:53 -- nvmf/common.sh@470 -- # nvmfpid=217137 00:11:16.470 03:24:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:16.470 03:24:53 -- nvmf/common.sh@471 -- # waitforlisten 217137 00:11:16.470 03:24:53 -- common/autotest_common.sh@817 -- # '[' -z 217137 ']' 00:11:16.470 03:24:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.470 03:24:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:16.470 03:24:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.470 03:24:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:16.470 03:24:53 -- common/autotest_common.sh@10 -- # set +x 00:11:16.470 [2024-04-19 03:24:53.884016] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
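
[Editor's note] Before the target comes up, nvmf_tcp_init wired the two e810 ports as shown in the xtrace above: the target-side port moves into a private network namespace, the initiator-side port stays in the root namespace, and both directions are ping-verified. Condensed, with the same interface names and addresses as in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

Every nvmf_tgt invocation after this point is therefore prefixed with "ip netns exec cvl_0_0_ns_spdk", as the NVMF_TARGET_NS_CMD line above records.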
00:11:16.470 [2024-04-19 03:24:53.884095] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.470 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.470 [2024-04-19 03:24:53.949377] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.728 [2024-04-19 03:24:54.059385] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.728 [2024-04-19 03:24:54.059467] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.728 [2024-04-19 03:24:54.059481] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.728 [2024-04-19 03:24:54.059491] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.728 [2024-04-19 03:24:54.059508] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.728 [2024-04-19 03:24:54.059535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.728 03:24:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:16.728 03:24:54 -- common/autotest_common.sh@850 -- # return 0 00:11:16.728 03:24:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:16.728 03:24:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:16.728 03:24:54 -- common/autotest_common.sh@10 -- # set +x 00:11:16.728 03:24:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.728 03:24:54 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:16.986 [2024-04-19 03:24:54.441161] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.986 03:24:54 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:11:16.986 03:24:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:16.986 03:24:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.986 03:24:54 -- common/autotest_common.sh@10 -- # set +x 00:11:17.245 ************************************ 00:11:17.245 START TEST lvs_grow_clean 00:11:17.245 ************************************ 00:11:17.245 03:24:54 -- common/autotest_common.sh@1111 -- # lvs_grow 00:11:17.245 03:24:54 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:17.245 03:24:54 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:17.245 03:24:54 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:17.245 03:24:54 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:17.245 03:24:54 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:17.245 03:24:54 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:17.245 03:24:54 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:17.245 03:24:54 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:17.245 03:24:54 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:17.502 03:24:54 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:17.502 03:24:54 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:17.760 03:24:55 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a5bef148-7929-45de-9a2d-4f50c105a168 00:11:17.760 03:24:55 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5bef148-7929-45de-9a2d-4f50c105a168 00:11:17.760 03:24:55 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:18.020 03:24:55 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:18.020 03:24:55 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:18.020 03:24:55 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5bef148-7929-45de-9a2d-4f50c105a168 lvol 150 00:11:18.277 03:24:55 -- target/nvmf_lvs_grow.sh@33 -- # lvol=497bee97-0b08-4ae0-86b5-31c230f05bdf 00:11:18.277 03:24:55 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:18.278 03:24:55 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:18.278 [2024-04-19 03:24:55.822443] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:18.278 [2024-04-19 03:24:55.822520] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:18.278 true 00:11:18.535 03:24:55 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5bef148-7929-45de-9a2d-4f50c105a168 00:11:18.535 03:24:55 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:18.535 03:24:56 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:18.535 03:24:56 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:19.130 03:24:56 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 497bee97-0b08-4ae0-86b5-31c230f05bdf 00:11:19.130 03:24:56 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:19.389 [2024-04-19 03:24:56.910007] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.389 03:24:56 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:19.647 03:24:57 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=217581 00:11:19.647 03:24:57 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:19.647 03:24:57 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:19.647 03:24:57 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 217581 
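
[Editor's note] The clean-grow scenario traced above reduces to a short RPC sequence: build an lvstore on a 200M file-backed AIO bdev (49 data clusters at 4 MiB each), carve out a 150 MiB lvol, double the backing file, rescan, and later grow the lvstore. A condensed sketch; $rpc is again my shorthand for scripts/rpc.py, and aio_file stands in for the test's .../spdk/test/nvmf/target/aio_bdev path:

  truncate -s 200M aio_file
  $rpc bdev_aio_create aio_file aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)                 # 49 data clusters
  $rpc bdev_lvol_create -u "$lvs" lvol 150                             # 150 MiB lvol
  truncate -s 400M aio_file                                            # grow the backing file
  $rpc bdev_aio_rescan aio_bdev                                        # block count 51200 -> 102400
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                                # lvstore claims the new space
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 49 -> 99

In the actual run, bdev_lvol_grow_lvstore is deliberately issued mid-way through the bdevperf job below, so the capacity change lands while I/O is active.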
/var/tmp/bdevperf.sock 00:11:19.647 03:24:57 -- common/autotest_common.sh@817 -- # '[' -z 217581 ']' 00:11:19.647 03:24:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:19.647 03:24:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:19.647 03:24:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:19.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:19.647 03:24:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:19.647 03:24:57 -- common/autotest_common.sh@10 -- # set +x 00:11:19.906 [2024-04-19 03:24:57.211394] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:11:19.906 [2024-04-19 03:24:57.211469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid217581 ] 00:11:19.906 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.906 [2024-04-19 03:24:57.274755] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.906 [2024-04-19 03:24:57.384594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.165 03:24:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:20.165 03:24:57 -- common/autotest_common.sh@850 -- # return 0 00:11:20.165 03:24:57 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:20.423 Nvme0n1 00:11:20.423 03:24:57 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:20.682 [ 00:11:20.682 { 00:11:20.682 "name": "Nvme0n1", 00:11:20.682 "aliases": [ 00:11:20.682 "497bee97-0b08-4ae0-86b5-31c230f05bdf" 00:11:20.682 ], 00:11:20.682 "product_name": "NVMe disk", 00:11:20.682 "block_size": 4096, 00:11:20.682 "num_blocks": 38912, 00:11:20.682 "uuid": "497bee97-0b08-4ae0-86b5-31c230f05bdf", 00:11:20.682 "assigned_rate_limits": { 00:11:20.682 "rw_ios_per_sec": 0, 00:11:20.682 "rw_mbytes_per_sec": 0, 00:11:20.682 "r_mbytes_per_sec": 0, 00:11:20.682 "w_mbytes_per_sec": 0 00:11:20.682 }, 00:11:20.682 "claimed": false, 00:11:20.682 "zoned": false, 00:11:20.682 "supported_io_types": { 00:11:20.682 "read": true, 00:11:20.682 "write": true, 00:11:20.682 "unmap": true, 00:11:20.682 "write_zeroes": true, 00:11:20.682 "flush": true, 00:11:20.682 "reset": true, 00:11:20.682 "compare": true, 00:11:20.682 "compare_and_write": true, 00:11:20.682 "abort": true, 00:11:20.682 "nvme_admin": true, 00:11:20.682 "nvme_io": true 00:11:20.682 }, 00:11:20.682 "memory_domains": [ 00:11:20.682 { 00:11:20.682 "dma_device_id": "system", 00:11:20.682 "dma_device_type": 1 00:11:20.682 } 00:11:20.682 ], 00:11:20.682 "driver_specific": { 00:11:20.682 "nvme": [ 00:11:20.682 { 00:11:20.682 "trid": { 00:11:20.682 "trtype": "TCP", 00:11:20.682 "adrfam": "IPv4", 00:11:20.682 "traddr": "10.0.0.2", 00:11:20.682 "trsvcid": "4420", 00:11:20.682 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:20.682 }, 00:11:20.682 "ctrlr_data": { 00:11:20.682 "cntlid": 1, 00:11:20.682 "vendor_id": "0x8086", 00:11:20.682 "model_number": "SPDK bdev Controller", 00:11:20.682 "serial_number": "SPDK0", 00:11:20.682 
"firmware_revision": "24.05", 00:11:20.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:20.682 "oacs": { 00:11:20.682 "security": 0, 00:11:20.682 "format": 0, 00:11:20.682 "firmware": 0, 00:11:20.682 "ns_manage": 0 00:11:20.682 }, 00:11:20.682 "multi_ctrlr": true, 00:11:20.682 "ana_reporting": false 00:11:20.682 }, 00:11:20.682 "vs": { 00:11:20.682 "nvme_version": "1.3" 00:11:20.682 }, 00:11:20.682 "ns_data": { 00:11:20.682 "id": 1, 00:11:20.682 "can_share": true 00:11:20.682 } 00:11:20.682 } 00:11:20.682 ], 00:11:20.682 "mp_policy": "active_passive" 00:11:20.682 } 00:11:20.682 } 00:11:20.682 ] 00:11:20.682 03:24:58 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=217716 00:11:20.682 03:24:58 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:20.682 03:24:58 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:20.682 Running I/O for 10 seconds... 00:11:22.063 Latency(us) 00:11:22.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:22.063 Nvme0n1 : 1.00 14168.00 55.34 0.00 0.00 0.00 0.00 0.00 00:11:22.063 =================================================================================================================== 00:11:22.063 Total : 14168.00 55.34 0.00 0.00 0.00 0.00 0.00 00:11:22.063 00:11:22.629 03:25:00 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a5bef148-7929-45de-9a2d-4f50c105a168 00:11:22.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:22.887 Nvme0n1 : 2.00 14410.00 56.29 0.00 0.00 0.00 0.00 0.00 00:11:22.887 =================================================================================================================== 00:11:22.887 Total : 14410.00 56.29 0.00 0.00 0.00 0.00 0.00 00:11:22.887 00:11:22.887 true 00:11:22.887 03:25:00 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5bef148-7929-45de-9a2d-4f50c105a168 00:11:22.887 03:25:00 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:23.146 03:25:00 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:23.146 03:25:00 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:23.146 03:25:00 -- target/nvmf_lvs_grow.sh@65 -- # wait 217716 00:11:23.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:23.715 Nvme0n1 : 3.00 14514.00 56.70 0.00 0.00 0.00 0.00 0.00 00:11:23.715 =================================================================================================================== 00:11:23.715 Total : 14514.00 56.70 0.00 0.00 0.00 0.00 0.00 00:11:23.715 00:11:25.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:25.097 Nvme0n1 : 4.00 14651.50 57.23 0.00 0.00 0.00 0.00 0.00 00:11:25.097 =================================================================================================================== 00:11:25.097 Total : 14651.50 57.23 0.00 0.00 0.00 0.00 0.00 00:11:25.097 00:11:26.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:26.037 Nvme0n1 : 5.00 14699.80 57.42 0.00 0.00 0.00 0.00 0.00 00:11:26.037 =================================================================================================================== 00:11:26.037 Total : 14699.80 57.42 0.00 
0.00 0.00 0.00 0.00 00:11:26.037 00:11:26.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:26.977 Nvme0n1 : 6.00 14734.33 57.56 0.00 0.00 0.00 0.00 0.00 00:11:26.977 =================================================================================================================== 00:11:26.977 Total : 14734.33 57.56 0.00 0.00 0.00 0.00 0.00 00:11:26.977 00:11:27.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:27.916 Nvme0n1 : 7.00 14780.86 57.74 0.00 0.00 0.00 0.00 0.00 00:11:27.916 =================================================================================================================== 00:11:27.916 Total : 14780.86 57.74 0.00 0.00 0.00 0.00 0.00 00:11:27.916 00:11:28.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:28.856 Nvme0n1 : 8.00 14835.12 57.95 0.00 0.00 0.00 0.00 0.00 00:11:28.856 =================================================================================================================== 00:11:28.856 Total : 14835.12 57.95 0.00 0.00 0.00 0.00 0.00 00:11:28.856 00:11:29.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:29.793 Nvme0n1 : 9.00 14863.44 58.06 0.00 0.00 0.00 0.00 0.00 00:11:29.793 =================================================================================================================== 00:11:29.793 Total : 14863.44 58.06 0.00 0.00 0.00 0.00 0.00 00:11:29.793 00:11:30.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:30.732 Nvme0n1 : 10.00 14930.50 58.32 0.00 0.00 0.00 0.00 0.00 00:11:30.732 =================================================================================================================== 00:11:30.732 Total : 14930.50 58.32 0.00 0.00 0.00 0.00 0.00 00:11:30.732 00:11:30.732 00:11:30.732 Latency(us) 00:11:30.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:30.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:30.733 Nvme0n1 : 10.01 14928.06 58.31 0.00 0.00 8568.75 4587.52 17185.00 00:11:30.733 =================================================================================================================== 00:11:30.733 Total : 14928.06 58.31 0.00 0.00 8568.75 4587.52 17185.00 00:11:30.733 0 00:11:30.733 03:25:08 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 217581 00:11:30.733 03:25:08 -- common/autotest_common.sh@936 -- # '[' -z 217581 ']' 00:11:30.733 03:25:08 -- common/autotest_common.sh@940 -- # kill -0 217581 00:11:30.733 03:25:08 -- common/autotest_common.sh@941 -- # uname 00:11:30.733 03:25:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:30.733 03:25:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 217581 00:11:30.991 03:25:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:30.991 03:25:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:30.991 03:25:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 217581' 00:11:30.991 killing process with pid 217581 00:11:30.991 03:25:08 -- common/autotest_common.sh@955 -- # kill 217581 00:11:30.991 Received shutdown signal, test time was about 10.000000 seconds 00:11:30.991 00:11:30.991 Latency(us) 00:11:30.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:30.991 =================================================================================================================== 00:11:30.991 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:11:30.991 03:25:08 -- common/autotest_common.sh@960 -- # wait 217581 00:11:31.251 03:25:08 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:31.510 03:25:08 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5bef148-7929-45de-9a2d-4f50c105a168 00:11:31.510 03:25:08 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:11:31.769 03:25:09 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:11:31.769 03:25:09 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:11:31.769 03:25:09 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:32.027 [2024-04-19 03:25:09.358310] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:32.027 03:25:09 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5bef148-7929-45de-9a2d-4f50c105a168 00:11:32.027 03:25:09 -- common/autotest_common.sh@638 -- # local es=0 00:11:32.027 03:25:09 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5bef148-7929-45de-9a2d-4f50c105a168 00:11:32.027 03:25:09 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:32.027 03:25:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:32.027 03:25:09 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:32.027 03:25:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:32.027 03:25:09 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:32.027 03:25:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:32.027 03:25:09 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:32.027 03:25:09 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:32.027 03:25:09 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5bef148-7929-45de-9a2d-4f50c105a168 00:11:32.287 request: 00:11:32.287 { 00:11:32.287 "uuid": "a5bef148-7929-45de-9a2d-4f50c105a168", 00:11:32.287 "method": "bdev_lvol_get_lvstores", 00:11:32.287 "req_id": 1 00:11:32.287 } 00:11:32.287 Got JSON-RPC error response 00:11:32.287 response: 00:11:32.287 { 00:11:32.287 "code": -19, 00:11:32.287 "message": "No such device" 00:11:32.287 } 00:11:32.287 03:25:09 -- common/autotest_common.sh@641 -- # es=1 00:11:32.287 03:25:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:32.287 03:25:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:32.287 03:25:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:32.287 03:25:09 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:32.546 aio_bdev 00:11:32.547 03:25:09 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
497bee97-0b08-4ae0-86b5-31c230f05bdf 00:11:32.547 03:25:09 -- common/autotest_common.sh@885 -- # local bdev_name=497bee97-0b08-4ae0-86b5-31c230f05bdf 00:11:32.547 03:25:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:32.547 03:25:09 -- common/autotest_common.sh@887 -- # local i 00:11:32.547 03:25:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:32.547 03:25:09 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:32.547 03:25:09 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:32.806 03:25:10 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 497bee97-0b08-4ae0-86b5-31c230f05bdf -t 2000 00:11:33.064 [ 00:11:33.064 { 00:11:33.064 "name": "497bee97-0b08-4ae0-86b5-31c230f05bdf", 00:11:33.064 "aliases": [ 00:11:33.064 "lvs/lvol" 00:11:33.064 ], 00:11:33.064 "product_name": "Logical Volume", 00:11:33.064 "block_size": 4096, 00:11:33.064 "num_blocks": 38912, 00:11:33.064 "uuid": "497bee97-0b08-4ae0-86b5-31c230f05bdf", 00:11:33.064 "assigned_rate_limits": { 00:11:33.064 "rw_ios_per_sec": 0, 00:11:33.064 "rw_mbytes_per_sec": 0, 00:11:33.064 "r_mbytes_per_sec": 0, 00:11:33.064 "w_mbytes_per_sec": 0 00:11:33.064 }, 00:11:33.064 "claimed": false, 00:11:33.064 "zoned": false, 00:11:33.064 "supported_io_types": { 00:11:33.064 "read": true, 00:11:33.064 "write": true, 00:11:33.064 "unmap": true, 00:11:33.064 "write_zeroes": true, 00:11:33.064 "flush": false, 00:11:33.064 "reset": true, 00:11:33.064 "compare": false, 00:11:33.064 "compare_and_write": false, 00:11:33.064 "abort": false, 00:11:33.064 "nvme_admin": false, 00:11:33.064 "nvme_io": false 00:11:33.064 }, 00:11:33.064 "driver_specific": { 00:11:33.064 "lvol": { 00:11:33.064 "lvol_store_uuid": "a5bef148-7929-45de-9a2d-4f50c105a168", 00:11:33.064 "base_bdev": "aio_bdev", 00:11:33.064 "thin_provision": false, 00:11:33.064 "snapshot": false, 00:11:33.064 "clone": false, 00:11:33.064 "esnap_clone": false 00:11:33.064 } 00:11:33.064 } 00:11:33.064 } 00:11:33.064 ] 00:11:33.064 03:25:10 -- common/autotest_common.sh@893 -- # return 0 00:11:33.064 03:25:10 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5bef148-7929-45de-9a2d-4f50c105a168 00:11:33.064 03:25:10 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:11:33.323 03:25:10 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:11:33.323 03:25:10 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5bef148-7929-45de-9a2d-4f50c105a168 00:11:33.323 03:25:10 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:11:33.323 03:25:10 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:11:33.323 03:25:10 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 497bee97-0b08-4ae0-86b5-31c230f05bdf 00:11:33.582 03:25:11 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5bef148-7929-45de-9a2d-4f50c105a168 00:11:33.841 03:25:11 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:34.099 03:25:11 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 
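
[Editor's note] The accounting asserted just above is worth spelling out: after the grow, the lvstore holds 99 clusters of 4 MiB; the 150 MiB lvol consumes ceil(150/4) = 38 of them, leaving exactly the 61 free_clusters the test reads back. An equivalent standalone check (same lvstore UUID as in the trace) might look like:

  lvs=a5bef148-7929-45de-9a2d-4f50c105a168
  free=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
  total=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  [ "$total" -eq 99 ] && [ "$free" -eq 61 ]        # 99 - ceil(150/4) = 61

The preceding NOT-wrapped bdev_lvol_get_lvstores call checks the inverse property: once bdev_aio_delete removes the base bdev, the lvstore must vanish with it, and the lookup is expected to fail with -19 "No such device", as the JSON-RPC error response above shows.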
00:11:34.099 00:11:34.099 real 0m17.059s 00:11:34.099 user 0m16.540s 00:11:34.099 sys 0m1.892s 00:11:34.099 03:25:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:34.099 03:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:34.099 ************************************ 00:11:34.099 END TEST lvs_grow_clean 00:11:34.099 ************************************ 00:11:34.099 03:25:11 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:34.099 03:25:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:34.099 03:25:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:34.099 03:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:34.357 ************************************ 00:11:34.357 START TEST lvs_grow_dirty 00:11:34.357 ************************************ 00:11:34.357 03:25:11 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:11:34.357 03:25:11 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:34.357 03:25:11 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:34.357 03:25:11 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:34.357 03:25:11 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:34.357 03:25:11 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:34.357 03:25:11 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:34.357 03:25:11 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:34.357 03:25:11 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:34.357 03:25:11 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:34.616 03:25:12 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:34.616 03:25:12 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:34.875 03:25:12 -- target/nvmf_lvs_grow.sh@28 -- # lvs=94b15c17-df48-4e66-ba25-633d864c912d 00:11:34.875 03:25:12 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94b15c17-df48-4e66-ba25-633d864c912d 00:11:34.875 03:25:12 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:35.135 03:25:12 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:35.135 03:25:12 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:35.135 03:25:12 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 94b15c17-df48-4e66-ba25-633d864c912d lvol 150 00:11:35.395 03:25:12 -- target/nvmf_lvs_grow.sh@33 -- # lvol=0986957f-f039-464c-9ad5-4be65859e27f 00:11:35.395 03:25:12 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:35.395 03:25:12 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:35.655 [2024-04-19 03:25:12.997454] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 
51200, new block count 102400 00:11:35.655 [2024-04-19 03:25:12.997536] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:35.655 true 00:11:35.655 03:25:13 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94b15c17-df48-4e66-ba25-633d864c912d 00:11:35.655 03:25:13 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:35.959 03:25:13 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:35.959 03:25:13 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:36.217 03:25:13 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0986957f-f039-464c-9ad5-4be65859e27f 00:11:36.475 03:25:13 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:36.734 03:25:14 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:36.991 03:25:14 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=219636 00:11:36.991 03:25:14 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:36.991 03:25:14 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:36.991 03:25:14 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 219636 /var/tmp/bdevperf.sock 00:11:36.991 03:25:14 -- common/autotest_common.sh@817 -- # '[' -z 219636 ']' 00:11:36.991 03:25:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:36.991 03:25:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:36.991 03:25:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:36.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:36.991 03:25:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:36.991 03:25:14 -- common/autotest_common.sh@10 -- # set +x 00:11:36.991 [2024-04-19 03:25:14.338122] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
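
[Editor's note] As in the clean case, I/O is driven from a separate bdevperf process that owns its own RPC socket; the trace that follows is the condensed equivalent of these three steps (paths shortened relative to the workspace root):

  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # 10 s randwrite job, prints the latency table

The -z flag starts bdevperf idle waiting on its RPC socket, which is why the attach and perform_tests calls arrive over /var/tmp/bdevperf.sock rather than the target's default socket.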
00:11:36.991 [2024-04-19 03:25:14.338190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid219636 ] 00:11:36.991 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.991 [2024-04-19 03:25:14.398809] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.991 [2024-04-19 03:25:14.514215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.249 03:25:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:37.249 03:25:14 -- common/autotest_common.sh@850 -- # return 0 00:11:37.249 03:25:14 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:37.816 Nvme0n1 00:11:37.816 03:25:15 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:37.816 [ 00:11:37.816 { 00:11:37.816 "name": "Nvme0n1", 00:11:37.816 "aliases": [ 00:11:37.816 "0986957f-f039-464c-9ad5-4be65859e27f" 00:11:37.816 ], 00:11:37.816 "product_name": "NVMe disk", 00:11:37.816 "block_size": 4096, 00:11:37.816 "num_blocks": 38912, 00:11:37.816 "uuid": "0986957f-f039-464c-9ad5-4be65859e27f", 00:11:37.816 "assigned_rate_limits": { 00:11:37.816 "rw_ios_per_sec": 0, 00:11:37.816 "rw_mbytes_per_sec": 0, 00:11:37.816 "r_mbytes_per_sec": 0, 00:11:37.816 "w_mbytes_per_sec": 0 00:11:37.816 }, 00:11:37.816 "claimed": false, 00:11:37.816 "zoned": false, 00:11:37.816 "supported_io_types": { 00:11:37.816 "read": true, 00:11:37.816 "write": true, 00:11:37.816 "unmap": true, 00:11:37.816 "write_zeroes": true, 00:11:37.816 "flush": true, 00:11:37.816 "reset": true, 00:11:37.816 "compare": true, 00:11:37.817 "compare_and_write": true, 00:11:37.817 "abort": true, 00:11:37.817 "nvme_admin": true, 00:11:37.817 "nvme_io": true 00:11:37.817 }, 00:11:37.817 "memory_domains": [ 00:11:37.817 { 00:11:37.817 "dma_device_id": "system", 00:11:37.817 "dma_device_type": 1 00:11:37.817 } 00:11:37.817 ], 00:11:37.817 "driver_specific": { 00:11:37.817 "nvme": [ 00:11:37.817 { 00:11:37.817 "trid": { 00:11:37.817 "trtype": "TCP", 00:11:37.817 "adrfam": "IPv4", 00:11:37.817 "traddr": "10.0.0.2", 00:11:37.817 "trsvcid": "4420", 00:11:37.817 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:37.817 }, 00:11:37.817 "ctrlr_data": { 00:11:37.817 "cntlid": 1, 00:11:37.817 "vendor_id": "0x8086", 00:11:37.817 "model_number": "SPDK bdev Controller", 00:11:37.817 "serial_number": "SPDK0", 00:11:37.817 "firmware_revision": "24.05", 00:11:37.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:37.817 "oacs": { 00:11:37.817 "security": 0, 00:11:37.817 "format": 0, 00:11:37.817 "firmware": 0, 00:11:37.817 "ns_manage": 0 00:11:37.817 }, 00:11:37.817 "multi_ctrlr": true, 00:11:37.817 "ana_reporting": false 00:11:37.817 }, 00:11:37.817 "vs": { 00:11:37.817 "nvme_version": "1.3" 00:11:37.817 }, 00:11:37.817 "ns_data": { 00:11:37.817 "id": 1, 00:11:37.817 "can_share": true 00:11:37.817 } 00:11:37.817 } 00:11:37.817 ], 00:11:37.817 "mp_policy": "active_passive" 00:11:37.817 } 00:11:37.817 } 00:11:37.817 ] 00:11:37.817 03:25:15 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=219771 00:11:37.817 03:25:15 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:37.817 03:25:15 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:38.076 Running I/O for 10 seconds... 00:11:39.015 Latency(us) 00:11:39.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.015 Nvme0n1 : 1.00 14411.00 56.29 0.00 0.00 0.00 0.00 0.00 00:11:39.015 =================================================================================================================== 00:11:39.015 Total : 14411.00 56.29 0.00 0.00 0.00 0.00 0.00 00:11:39.015 00:11:40.005 03:25:17 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 94b15c17-df48-4e66-ba25-633d864c912d 00:11:40.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.005 Nvme0n1 : 2.00 14011.00 54.73 0.00 0.00 0.00 0.00 0.00 00:11:40.005 =================================================================================================================== 00:11:40.005 Total : 14011.00 54.73 0.00 0.00 0.00 0.00 0.00 00:11:40.005 00:11:40.263 true 00:11:40.263 03:25:17 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94b15c17-df48-4e66-ba25-633d864c912d 00:11:40.263 03:25:17 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:40.521 03:25:17 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:40.521 03:25:17 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:40.521 03:25:17 -- target/nvmf_lvs_grow.sh@65 -- # wait 219771 00:11:41.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:41.090 Nvme0n1 : 3.00 13866.00 54.16 0.00 0.00 0.00 0.00 0.00 00:11:41.090 =================================================================================================================== 00:11:41.090 Total : 13866.00 54.16 0.00 0.00 0.00 0.00 0.00 00:11:41.090 00:11:42.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.028 Nvme0n1 : 4.00 13819.50 53.98 0.00 0.00 0.00 0.00 0.00 00:11:42.028 =================================================================================================================== 00:11:42.028 Total : 13819.50 53.98 0.00 0.00 0.00 0.00 0.00 00:11:42.028 00:11:42.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.967 Nvme0n1 : 5.00 13806.00 53.93 0.00 0.00 0.00 0.00 0.00 00:11:42.967 =================================================================================================================== 00:11:42.967 Total : 13806.00 53.93 0.00 0.00 0.00 0.00 0.00 00:11:42.967 00:11:44.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.348 Nvme0n1 : 6.00 13809.00 53.94 0.00 0.00 0.00 0.00 0.00 00:11:44.348 =================================================================================================================== 00:11:44.348 Total : 13809.00 53.94 0.00 0.00 0.00 0.00 0.00 00:11:44.348 00:11:45.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.289 Nvme0n1 : 7.00 13810.00 53.95 0.00 0.00 0.00 0.00 0.00 00:11:45.289 =================================================================================================================== 00:11:45.289 Total : 13810.00 53.95 0.00 0.00 0.00 0.00 0.00 00:11:45.289 00:11:46.232 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:11:46.232 Nvme0n1 : 8.00 13817.75 53.98 0.00 0.00 0.00 0.00 0.00 00:11:46.232 =================================================================================================================== 00:11:46.232 Total : 13817.75 53.98 0.00 0.00 0.00 0.00 0.00 00:11:46.232 00:11:47.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.172 Nvme0n1 : 9.00 13823.78 54.00 0.00 0.00 0.00 0.00 0.00 00:11:47.172 =================================================================================================================== 00:11:47.172 Total : 13823.78 54.00 0.00 0.00 0.00 0.00 0.00 00:11:47.172 00:11:48.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.116 Nvme0n1 : 10.00 13829.40 54.02 0.00 0.00 0.00 0.00 0.00 00:11:48.116 =================================================================================================================== 00:11:48.116 Total : 13829.40 54.02 0.00 0.00 0.00 0.00 0.00 00:11:48.116 00:11:48.116 00:11:48.116 Latency(us) 00:11:48.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.116 Nvme0n1 : 10.01 13829.11 54.02 0.00 0.00 9246.30 2645.71 17185.00 00:11:48.116 =================================================================================================================== 00:11:48.116 Total : 13829.11 54.02 0.00 0.00 9246.30 2645.71 17185.00 00:11:48.116 0 00:11:48.116 03:25:25 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 219636 00:11:48.116 03:25:25 -- common/autotest_common.sh@936 -- # '[' -z 219636 ']' 00:11:48.116 03:25:25 -- common/autotest_common.sh@940 -- # kill -0 219636 00:11:48.116 03:25:25 -- common/autotest_common.sh@941 -- # uname 00:11:48.116 03:25:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.116 03:25:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 219636 00:11:48.116 03:25:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:48.116 03:25:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:48.116 03:25:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 219636' 00:11:48.116 killing process with pid 219636 00:11:48.116 03:25:25 -- common/autotest_common.sh@955 -- # kill 219636 00:11:48.116 Received shutdown signal, test time was about 10.000000 seconds 00:11:48.116 00:11:48.116 Latency(us) 00:11:48.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.116 =================================================================================================================== 00:11:48.116 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:48.116 03:25:25 -- common/autotest_common.sh@960 -- # wait 219636 00:11:48.375 03:25:25 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:48.632 03:25:26 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94b15c17-df48-4e66-ba25-633d864c912d 00:11:48.632 03:25:26 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:11:48.891 03:25:26 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:11:48.891 03:25:26 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:11:48.891 03:25:26 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 217137 00:11:48.891 03:25:26 -- 
target/nvmf_lvs_grow.sh@74 -- # wait 217137 00:11:48.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 217137 Killed "${NVMF_APP[@]}" "$@" 00:11:48.891 03:25:26 -- target/nvmf_lvs_grow.sh@74 -- # true 00:11:48.891 03:25:26 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:11:48.891 03:25:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:48.891 03:25:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:48.891 03:25:26 -- common/autotest_common.sh@10 -- # set +x 00:11:48.891 03:25:26 -- nvmf/common.sh@470 -- # nvmfpid=221100 00:11:48.891 03:25:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:48.891 03:25:26 -- nvmf/common.sh@471 -- # waitforlisten 221100 00:11:48.891 03:25:26 -- common/autotest_common.sh@817 -- # '[' -z 221100 ']' 00:11:48.891 03:25:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.891 03:25:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:48.891 03:25:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.891 03:25:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:48.891 03:25:26 -- common/autotest_common.sh@10 -- # set +x 00:11:48.891 [2024-04-19 03:25:26.416970] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:11:48.891 [2024-04-19 03:25:26.417043] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.150 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.150 [2024-04-19 03:25:26.481876] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.150 [2024-04-19 03:25:26.586087] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.150 [2024-04-19 03:25:26.586153] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.150 [2024-04-19 03:25:26.586166] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.150 [2024-04-19 03:25:26.586177] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.150 [2024-04-19 03:25:26.586194] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
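The grow-and-verify step exercised above reduces to two RPC calls and a jq assertion; as a minimal sketch (the $rpc shorthand is introduced here for readability, and the lvstore UUID and expected count of 99 clusters are specific to this run):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
lvs=94b15c17-df48-4e66-ba25-633d864c912d
# grow the lvstore to fill the enlarged underlying bdev
$rpc bdev_lvol_grow_lvstore -u "$lvs"
# read back the cluster count and assert the store actually grew
data_clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
(( data_clusters == 99 ))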
00:11:49.150 [2024-04-19 03:25:26.586230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.150 03:25:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:49.150 03:25:26 -- common/autotest_common.sh@850 -- # return 0 00:11:49.150 03:25:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:49.150 03:25:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:49.150 03:25:26 -- common/autotest_common.sh@10 -- # set +x 00:11:49.408 03:25:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.408 03:25:26 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:49.667 [2024-04-19 03:25:26.978149] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:49.667 [2024-04-19 03:25:26.978290] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:49.667 [2024-04-19 03:25:26.978347] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:49.667 03:25:26 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:11:49.667 03:25:26 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 0986957f-f039-464c-9ad5-4be65859e27f 00:11:49.667 03:25:26 -- common/autotest_common.sh@885 -- # local bdev_name=0986957f-f039-464c-9ad5-4be65859e27f 00:11:49.667 03:25:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:49.667 03:25:26 -- common/autotest_common.sh@887 -- # local i 00:11:49.667 03:25:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:49.667 03:25:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:49.667 03:25:26 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:49.926 03:25:27 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0986957f-f039-464c-9ad5-4be65859e27f -t 2000 00:11:49.926 [ 00:11:49.926 { 00:11:49.926 "name": "0986957f-f039-464c-9ad5-4be65859e27f", 00:11:49.926 "aliases": [ 00:11:49.926 "lvs/lvol" 00:11:49.926 ], 00:11:49.926 "product_name": "Logical Volume", 00:11:49.926 "block_size": 4096, 00:11:49.926 "num_blocks": 38912, 00:11:49.926 "uuid": "0986957f-f039-464c-9ad5-4be65859e27f", 00:11:49.926 "assigned_rate_limits": { 00:11:49.926 "rw_ios_per_sec": 0, 00:11:49.926 "rw_mbytes_per_sec": 0, 00:11:49.926 "r_mbytes_per_sec": 0, 00:11:49.926 "w_mbytes_per_sec": 0 00:11:49.926 }, 00:11:49.926 "claimed": false, 00:11:49.926 "zoned": false, 00:11:49.926 "supported_io_types": { 00:11:49.926 "read": true, 00:11:49.926 "write": true, 00:11:49.926 "unmap": true, 00:11:49.926 "write_zeroes": true, 00:11:49.926 "flush": false, 00:11:49.926 "reset": true, 00:11:49.926 "compare": false, 00:11:49.926 "compare_and_write": false, 00:11:49.926 "abort": false, 00:11:49.926 "nvme_admin": false, 00:11:49.926 "nvme_io": false 00:11:49.926 }, 00:11:49.926 "driver_specific": { 00:11:49.926 "lvol": { 00:11:49.926 "lvol_store_uuid": "94b15c17-df48-4e66-ba25-633d864c912d", 00:11:49.926 "base_bdev": "aio_bdev", 00:11:49.926 "thin_provision": false, 00:11:49.926 "snapshot": false, 00:11:49.926 "clone": false, 00:11:49.926 "esnap_clone": false 00:11:49.926 } 00:11:49.926 } 00:11:49.926 } 00:11:49.926 ] 00:11:49.926 03:25:27 -- common/autotest_common.sh@893 -- # return 0 00:11:49.926 03:25:27 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94b15c17-df48-4e66-ba25-633d864c912d 00:11:49.926 03:25:27 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:11:50.184 03:25:27 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:11:50.184 03:25:27 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:11:50.184 03:25:27 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94b15c17-df48-4e66-ba25-633d864c912d 00:11:50.441 03:25:27 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:11:50.441 03:25:27 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:50.698 [2024-04-19 03:25:28.227098] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:50.957 03:25:28 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94b15c17-df48-4e66-ba25-633d864c912d 00:11:50.957 03:25:28 -- common/autotest_common.sh@638 -- # local es=0 00:11:50.957 03:25:28 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94b15c17-df48-4e66-ba25-633d864c912d 00:11:50.957 03:25:28 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:50.957 03:25:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:50.958 03:25:28 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:50.958 03:25:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:50.958 03:25:28 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:50.958 03:25:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:50.958 03:25:28 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:50.958 03:25:28 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:50.958 03:25:28 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94b15c17-df48-4e66-ba25-633d864c912d 00:11:51.218 request: 00:11:51.218 { 00:11:51.218 "uuid": "94b15c17-df48-4e66-ba25-633d864c912d", 00:11:51.218 "method": "bdev_lvol_get_lvstores", 00:11:51.218 "req_id": 1 00:11:51.218 } 00:11:51.218 Got JSON-RPC error response 00:11:51.218 response: 00:11:51.218 { 00:11:51.218 "code": -19, 00:11:51.218 "message": "No such device" 00:11:51.218 } 00:11:51.218 03:25:28 -- common/autotest_common.sh@641 -- # es=1 00:11:51.218 03:25:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:51.218 03:25:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:51.218 03:25:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:51.218 03:25:28 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:51.477 aio_bdev 00:11:51.477 03:25:28 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 0986957f-f039-464c-9ad5-4be65859e27f 00:11:51.477 03:25:28 -- 
common/autotest_common.sh@885 -- # local bdev_name=0986957f-f039-464c-9ad5-4be65859e27f 00:11:51.477 03:25:28 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:51.477 03:25:28 -- common/autotest_common.sh@887 -- # local i 00:11:51.477 03:25:28 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:51.477 03:25:28 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:51.477 03:25:28 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:51.477 03:25:29 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0986957f-f039-464c-9ad5-4be65859e27f -t 2000 00:11:51.736 [ 00:11:51.736 { 00:11:51.736 "name": "0986957f-f039-464c-9ad5-4be65859e27f", 00:11:51.736 "aliases": [ 00:11:51.736 "lvs/lvol" 00:11:51.736 ], 00:11:51.736 "product_name": "Logical Volume", 00:11:51.736 "block_size": 4096, 00:11:51.736 "num_blocks": 38912, 00:11:51.736 "uuid": "0986957f-f039-464c-9ad5-4be65859e27f", 00:11:51.736 "assigned_rate_limits": { 00:11:51.736 "rw_ios_per_sec": 0, 00:11:51.736 "rw_mbytes_per_sec": 0, 00:11:51.736 "r_mbytes_per_sec": 0, 00:11:51.736 "w_mbytes_per_sec": 0 00:11:51.736 }, 00:11:51.736 "claimed": false, 00:11:51.736 "zoned": false, 00:11:51.736 "supported_io_types": { 00:11:51.736 "read": true, 00:11:51.736 "write": true, 00:11:51.736 "unmap": true, 00:11:51.736 "write_zeroes": true, 00:11:51.736 "flush": false, 00:11:51.736 "reset": true, 00:11:51.736 "compare": false, 00:11:51.736 "compare_and_write": false, 00:11:51.736 "abort": false, 00:11:51.736 "nvme_admin": false, 00:11:51.736 "nvme_io": false 00:11:51.736 }, 00:11:51.736 "driver_specific": { 00:11:51.736 "lvol": { 00:11:51.736 "lvol_store_uuid": "94b15c17-df48-4e66-ba25-633d864c912d", 00:11:51.736 "base_bdev": "aio_bdev", 00:11:51.736 "thin_provision": false, 00:11:51.736 "snapshot": false, 00:11:51.736 "clone": false, 00:11:51.736 "esnap_clone": false 00:11:51.736 } 00:11:51.736 } 00:11:51.736 } 00:11:51.736 ] 00:11:51.995 03:25:29 -- common/autotest_common.sh@893 -- # return 0 00:11:51.996 03:25:29 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94b15c17-df48-4e66-ba25-633d864c912d 00:11:51.996 03:25:29 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:11:52.255 03:25:29 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:11:52.255 03:25:29 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94b15c17-df48-4e66-ba25-633d864c912d 00:11:52.255 03:25:29 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:11:52.255 03:25:29 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:11:52.255 03:25:29 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0986957f-f039-464c-9ad5-4be65859e27f 00:11:52.514 03:25:30 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 94b15c17-df48-4e66-ba25-633d864c912d 00:11:53.081 03:25:30 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:53.081 03:25:30 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:53.081 00:11:53.081 real 0m18.883s 00:11:53.081 user 
0m46.483s 00:11:53.081 sys 0m5.241s 00:11:53.081 03:25:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:53.081 03:25:30 -- common/autotest_common.sh@10 -- # set +x 00:11:53.081 ************************************ 00:11:53.081 END TEST lvs_grow_dirty 00:11:53.081 ************************************ 00:11:53.341 03:25:30 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:53.341 03:25:30 -- common/autotest_common.sh@794 -- # type=--id 00:11:53.341 03:25:30 -- common/autotest_common.sh@795 -- # id=0 00:11:53.341 03:25:30 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:11:53.341 03:25:30 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:53.341 03:25:30 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:11:53.341 03:25:30 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:11:53.341 03:25:30 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:11:53.341 03:25:30 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:53.341 nvmf_trace.0 00:11:53.341 03:25:30 -- common/autotest_common.sh@809 -- # return 0 00:11:53.341 03:25:30 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:53.341 03:25:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:53.341 03:25:30 -- nvmf/common.sh@117 -- # sync 00:11:53.341 03:25:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:53.341 03:25:30 -- nvmf/common.sh@120 -- # set +e 00:11:53.341 03:25:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:53.341 03:25:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:53.341 rmmod nvme_tcp 00:11:53.341 rmmod nvme_fabrics 00:11:53.341 rmmod nvme_keyring 00:11:53.341 03:25:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:53.341 03:25:30 -- nvmf/common.sh@124 -- # set -e 00:11:53.341 03:25:30 -- nvmf/common.sh@125 -- # return 0 00:11:53.341 03:25:30 -- nvmf/common.sh@478 -- # '[' -n 221100 ']' 00:11:53.341 03:25:30 -- nvmf/common.sh@479 -- # killprocess 221100 00:11:53.341 03:25:30 -- common/autotest_common.sh@936 -- # '[' -z 221100 ']' 00:11:53.341 03:25:30 -- common/autotest_common.sh@940 -- # kill -0 221100 00:11:53.341 03:25:30 -- common/autotest_common.sh@941 -- # uname 00:11:53.341 03:25:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:53.341 03:25:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 221100 00:11:53.341 03:25:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:53.341 03:25:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:53.341 03:25:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 221100' 00:11:53.341 killing process with pid 221100 00:11:53.341 03:25:30 -- common/autotest_common.sh@955 -- # kill 221100 00:11:53.341 03:25:30 -- common/autotest_common.sh@960 -- # wait 221100 00:11:53.600 03:25:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:53.600 03:25:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:53.600 03:25:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:53.600 03:25:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:53.600 03:25:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:53.600 03:25:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.600 03:25:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:53.600 03:25:31 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:11:56.163 03:25:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:56.163 00:11:56.163 real 0m41.406s 00:11:56.163 user 1m8.743s 00:11:56.163 sys 0m9.069s 00:11:56.163 03:25:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:56.163 03:25:33 -- common/autotest_common.sh@10 -- # set +x 00:11:56.163 ************************************ 00:11:56.163 END TEST nvmf_lvs_grow 00:11:56.163 ************************************ 00:11:56.163 03:25:33 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:56.163 03:25:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:56.163 03:25:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:56.163 03:25:33 -- common/autotest_common.sh@10 -- # set +x 00:11:56.163 ************************************ 00:11:56.163 START TEST nvmf_bdev_io_wait 00:11:56.163 ************************************ 00:11:56.163 03:25:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:56.163 * Looking for test storage... 00:11:56.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.163 03:25:33 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.163 03:25:33 -- nvmf/common.sh@7 -- # uname -s 00:11:56.163 03:25:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.163 03:25:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.163 03:25:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.163 03:25:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.163 03:25:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.163 03:25:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.163 03:25:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.163 03:25:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.163 03:25:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.163 03:25:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.163 03:25:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.163 03:25:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.163 03:25:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.163 03:25:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.163 03:25:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.163 03:25:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.163 03:25:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.163 03:25:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.163 03:25:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.163 03:25:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.163 03:25:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.163 03:25:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.163 03:25:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.163 03:25:33 -- paths/export.sh@5 -- # export PATH 00:11:56.163 03:25:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.163 03:25:33 -- nvmf/common.sh@47 -- # : 0 00:11:56.163 03:25:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:56.163 03:25:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:56.163 03:25:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.163 03:25:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.163 03:25:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.163 03:25:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:56.163 03:25:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:56.163 03:25:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:56.163 03:25:33 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.163 03:25:33 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.163 03:25:33 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:56.163 03:25:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:56.163 03:25:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.163 03:25:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:56.163 03:25:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:56.163 03:25:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:56.163 03:25:33 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.163 03:25:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.163 03:25:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.163 03:25:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:56.163 03:25:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:56.163 03:25:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:56.163 03:25:33 -- common/autotest_common.sh@10 -- # set +x 00:11:58.070 03:25:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:58.070 03:25:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:58.070 03:25:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:58.070 03:25:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:58.070 03:25:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:58.070 03:25:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:58.070 03:25:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:58.070 03:25:35 -- nvmf/common.sh@295 -- # net_devs=() 00:11:58.070 03:25:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:58.070 03:25:35 -- nvmf/common.sh@296 -- # e810=() 00:11:58.070 03:25:35 -- nvmf/common.sh@296 -- # local -ga e810 00:11:58.070 03:25:35 -- nvmf/common.sh@297 -- # x722=() 00:11:58.070 03:25:35 -- nvmf/common.sh@297 -- # local -ga x722 00:11:58.070 03:25:35 -- nvmf/common.sh@298 -- # mlx=() 00:11:58.070 03:25:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:58.070 03:25:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.070 03:25:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.070 03:25:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.070 03:25:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.070 03:25:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.070 03:25:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.070 03:25:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.070 03:25:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.070 03:25:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.070 03:25:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.070 03:25:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.070 03:25:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:58.070 03:25:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:58.070 03:25:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:58.070 03:25:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.070 03:25:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:58.070 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:58.070 03:25:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:11:58.070 03:25:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:58.070 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:58.070 03:25:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:58.070 03:25:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.070 03:25:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.070 03:25:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:58.070 03:25:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.070 03:25:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:58.070 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:58.070 03:25:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.070 03:25:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.070 03:25:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.070 03:25:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:58.070 03:25:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.070 03:25:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:58.070 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:58.070 03:25:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.070 03:25:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:58.070 03:25:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:58.070 03:25:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:58.070 03:25:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:58.070 03:25:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.070 03:25:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.070 03:25:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.070 03:25:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:58.071 03:25:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.071 03:25:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.071 03:25:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:58.071 03:25:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.071 03:25:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.071 03:25:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:58.071 03:25:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:58.071 03:25:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.071 03:25:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.071 03:25:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.071 03:25:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.071 03:25:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:58.071 03:25:35 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.071 03:25:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.071 03:25:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.071 03:25:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:58.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:11:58.071 00:11:58.071 --- 10.0.0.2 ping statistics --- 00:11:58.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.071 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:11:58.071 03:25:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:58.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:11:58.071 00:11:58.071 --- 10.0.0.1 ping statistics --- 00:11:58.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.071 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:11:58.071 03:25:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.071 03:25:35 -- nvmf/common.sh@411 -- # return 0 00:11:58.071 03:25:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:58.071 03:25:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.071 03:25:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:58.071 03:25:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:58.071 03:25:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.071 03:25:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:58.071 03:25:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:58.071 03:25:35 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:58.071 03:25:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:58.071 03:25:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:58.071 03:25:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.071 03:25:35 -- nvmf/common.sh@470 -- # nvmfpid=223630 00:11:58.071 03:25:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:58.071 03:25:35 -- nvmf/common.sh@471 -- # waitforlisten 223630 00:11:58.071 03:25:35 -- common/autotest_common.sh@817 -- # '[' -z 223630 ']' 00:11:58.071 03:25:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.071 03:25:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:58.071 03:25:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.071 03:25:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:58.071 03:25:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.071 [2024-04-19 03:25:35.532136] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
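Condensed, the namespace wiring that the two pings above just verified is the following (interface names and addresses exactly as used by nvmf/common.sh in this run):
# the target-side NIC moves into its own namespace; the initiator NIC stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port toward the namespace
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # root ns reaches the target namespace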
00:11:58.071 [2024-04-19 03:25:35.532216] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.071 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.071 [2024-04-19 03:25:35.596681] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.329 [2024-04-19 03:25:35.709049] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.329 [2024-04-19 03:25:35.709110] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.329 [2024-04-19 03:25:35.709123] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.330 [2024-04-19 03:25:35.709133] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.330 [2024-04-19 03:25:35.709143] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.330 [2024-04-19 03:25:35.709227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.330 [2024-04-19 03:25:35.709288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.330 [2024-04-19 03:25:35.709355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.330 [2024-04-19 03:25:35.709358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.330 03:25:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:58.330 03:25:35 -- common/autotest_common.sh@850 -- # return 0 00:11:58.330 03:25:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:58.330 03:25:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:58.330 03:25:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.330 03:25:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.330 03:25:35 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:58.330 03:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.330 03:25:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.330 03:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.330 03:25:35 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:58.330 03:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.330 03:25:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.330 03:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.330 03:25:35 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.330 03:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.330 03:25:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.330 [2024-04-19 03:25:35.846289] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.330 03:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.330 03:25:35 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:58.330 03:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.330 03:25:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.589 Malloc0 00:11:58.589 03:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:58.589 03:25:35 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.589 03:25:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.589 03:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:58.589 03:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.589 03:25:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.589 03:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.589 03:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.589 03:25:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.589 [2024-04-19 03:25:35.915002] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.589 03:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=223658 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@30 -- # READ_PID=223660 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:58.589 03:25:35 -- nvmf/common.sh@521 -- # config=() 00:11:58.589 03:25:35 -- nvmf/common.sh@521 -- # local subsystem config 00:11:58.589 03:25:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=223662 00:11:58.589 03:25:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:58.589 { 00:11:58.589 "params": { 00:11:58.589 "name": "Nvme$subsystem", 00:11:58.589 "trtype": "$TEST_TRANSPORT", 00:11:58.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:58.589 "adrfam": "ipv4", 00:11:58.589 "trsvcid": "$NVMF_PORT", 00:11:58.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:58.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:58.589 "hdgst": ${hdgst:-false}, 00:11:58.589 "ddgst": ${ddgst:-false} 00:11:58.589 }, 00:11:58.589 "method": "bdev_nvme_attach_controller" 00:11:58.589 } 00:11:58.589 EOF 00:11:58.589 )") 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:58.589 03:25:35 -- nvmf/common.sh@521 -- # config=() 00:11:58.589 03:25:35 -- nvmf/common.sh@521 -- # local subsystem config 00:11:58.589 03:25:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:58.589 03:25:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:58.589 { 00:11:58.589 "params": { 00:11:58.589 "name": "Nvme$subsystem", 00:11:58.589 "trtype": "$TEST_TRANSPORT", 00:11:58.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:58.589 "adrfam": "ipv4", 00:11:58.589 "trsvcid": "$NVMF_PORT", 00:11:58.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:58.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:58.589 "hdgst": ${hdgst:-false}, 00:11:58.589 "ddgst": ${ddgst:-false} 00:11:58.589 }, 00:11:58.589 "method": "bdev_nvme_attach_controller" 00:11:58.589 } 00:11:58.589 EOF 00:11:58.589 )") 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:58.589 
03:25:35 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=223664 00:11:58.589 03:25:35 -- nvmf/common.sh@521 -- # config=() 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@35 -- # sync 00:11:58.589 03:25:35 -- nvmf/common.sh@521 -- # local subsystem config 00:11:58.589 03:25:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:58.589 03:25:35 -- nvmf/common.sh@543 -- # cat 00:11:58.589 03:25:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:58.589 { 00:11:58.589 "params": { 00:11:58.589 "name": "Nvme$subsystem", 00:11:58.589 "trtype": "$TEST_TRANSPORT", 00:11:58.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:58.589 "adrfam": "ipv4", 00:11:58.589 "trsvcid": "$NVMF_PORT", 00:11:58.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:58.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:58.589 "hdgst": ${hdgst:-false}, 00:11:58.589 "ddgst": ${ddgst:-false} 00:11:58.589 }, 00:11:58.589 "method": "bdev_nvme_attach_controller" 00:11:58.589 } 00:11:58.589 EOF 00:11:58.589 )") 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:58.589 03:25:35 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:58.589 03:25:35 -- nvmf/common.sh@521 -- # config=() 00:11:58.589 03:25:35 -- nvmf/common.sh@543 -- # cat 00:11:58.589 03:25:35 -- nvmf/common.sh@521 -- # local subsystem config 00:11:58.589 03:25:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:58.589 03:25:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:58.589 { 00:11:58.589 "params": { 00:11:58.589 "name": "Nvme$subsystem", 00:11:58.589 "trtype": "$TEST_TRANSPORT", 00:11:58.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:58.589 "adrfam": "ipv4", 00:11:58.589 "trsvcid": "$NVMF_PORT", 00:11:58.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:58.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:58.590 "hdgst": ${hdgst:-false}, 00:11:58.590 "ddgst": ${ddgst:-false} 00:11:58.590 }, 00:11:58.590 "method": "bdev_nvme_attach_controller" 00:11:58.590 } 00:11:58.590 EOF 00:11:58.590 )") 00:11:58.590 03:25:35 -- nvmf/common.sh@543 -- # cat 00:11:58.590 03:25:35 -- target/bdev_io_wait.sh@37 -- # wait 223658 00:11:58.590 03:25:35 -- nvmf/common.sh@543 -- # cat 00:11:58.590 03:25:35 -- nvmf/common.sh@545 -- # jq . 00:11:58.590 03:25:35 -- nvmf/common.sh@545 -- # jq . 00:11:58.590 03:25:35 -- nvmf/common.sh@545 -- # jq . 00:11:58.590 03:25:35 -- nvmf/common.sh@546 -- # IFS=, 00:11:58.590 03:25:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:58.590 "params": { 00:11:58.590 "name": "Nvme1", 00:11:58.590 "trtype": "tcp", 00:11:58.590 "traddr": "10.0.0.2", 00:11:58.590 "adrfam": "ipv4", 00:11:58.590 "trsvcid": "4420", 00:11:58.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:58.590 "hdgst": false, 00:11:58.590 "ddgst": false 00:11:58.590 }, 00:11:58.590 "method": "bdev_nvme_attach_controller" 00:11:58.590 }' 00:11:58.590 03:25:35 -- nvmf/common.sh@545 -- # jq . 
00:11:58.590 03:25:35 -- nvmf/common.sh@546 -- # IFS=, 00:11:58.590 03:25:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:58.590 "params": { 00:11:58.590 "name": "Nvme1", 00:11:58.590 "trtype": "tcp", 00:11:58.590 "traddr": "10.0.0.2", 00:11:58.590 "adrfam": "ipv4", 00:11:58.590 "trsvcid": "4420", 00:11:58.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:58.590 "hdgst": false, 00:11:58.590 "ddgst": false 00:11:58.590 }, 00:11:58.590 "method": "bdev_nvme_attach_controller" 00:11:58.590 }' 00:11:58.590 03:25:35 -- nvmf/common.sh@546 -- # IFS=, 00:11:58.590 03:25:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:58.590 "params": { 00:11:58.590 "name": "Nvme1", 00:11:58.590 "trtype": "tcp", 00:11:58.590 "traddr": "10.0.0.2", 00:11:58.590 "adrfam": "ipv4", 00:11:58.590 "trsvcid": "4420", 00:11:58.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:58.590 "hdgst": false, 00:11:58.590 "ddgst": false 00:11:58.590 }, 00:11:58.590 "method": "bdev_nvme_attach_controller" 00:11:58.590 }' 00:11:58.590 03:25:35 -- nvmf/common.sh@546 -- # IFS=, 00:11:58.590 03:25:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:58.590 "params": { 00:11:58.590 "name": "Nvme1", 00:11:58.590 "trtype": "tcp", 00:11:58.590 "traddr": "10.0.0.2", 00:11:58.590 "adrfam": "ipv4", 00:11:58.590 "trsvcid": "4420", 00:11:58.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:58.590 "hdgst": false, 00:11:58.590 "ddgst": false 00:11:58.590 }, 00:11:58.590 "method": "bdev_nvme_attach_controller" 00:11:58.590 }' 00:11:58.590 [2024-04-19 03:25:35.960948] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:11:58.590 [2024-04-19 03:25:35.960948] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:11:58.590 [2024-04-19 03:25:35.960948] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:11:58.590 [2024-04-19 03:25:35.960949] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
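Each of the four bdevperf instances receives its Nvme1 attach-controller definition as inline JSON on a file descriptor rather than from a file on disk; a hedged sketch of that pattern for the write job (gen_nvmf_target_json is the helper shown assembling the config above, and <(...) is what expands to the /dev/fd/63 seen in the command lines):
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# -i gives the instance a distinct shared-memory id so four copies can coexist
$bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256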
00:11:58.590 [2024-04-19 03:25:35.961042] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:11:58.590 [2024-04-19 03:25:35.961042] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:11:58.590 [2024-04-19 03:25:35.961042] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:11:58.590 [2024-04-19 03:25:35.961043] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:11:58.590 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.590 EAL: No free 2048 kB hugepages reported on node 1 [2024-04-19 03:25:36.128871] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.848 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.848 [2024-04-19 03:25:36.229157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:11:58.848 [2024-04-19 03:25:36.234017] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.848 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.848 [2024-04-19 03:25:36.330837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:58.848 [2024-04-19 03:25:36.334766] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.108 [2024-04-19 03:25:36.431485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:59.108 [2024-04-19 03:25:36.435469] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.108 [2024-04-19 03:25:36.527898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:59.108 Running I/O for 1 seconds... 00:11:59.108 Running I/O for 1 seconds... 00:11:59.368 Running I/O for 1 seconds... 00:11:59.368 Running I/O for 1 seconds...
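The four one-second jobs above were launched concurrently, each pinned to its own core mask and instance id, and the script reaps them with wait; roughly (a sketch reusing the $bdevperf shorthand from the previous note):
$bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
$bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"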
00:12:00.308 00:12:00.308 Latency(us) 00:12:00.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.308 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:00.308 Nvme1n1 : 1.01 12021.77 46.96 0.00 0.00 10612.73 5485.61 20583.16 00:12:00.308 =================================================================================================================== 00:12:00.308 Total : 12021.77 46.96 0.00 0.00 10612.73 5485.61 20583.16 00:12:00.308 00:12:00.308 Latency(us) 00:12:00.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.308 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:00.308 Nvme1n1 : 1.02 4627.84 18.08 0.00 0.00 27394.16 8786.68 38641.97 00:12:00.308 =================================================================================================================== 00:12:00.308 Total : 4627.84 18.08 0.00 0.00 27394.16 8786.68 38641.97 00:12:00.308 00:12:00.308 Latency(us) 00:12:00.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.308 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:00.308 Nvme1n1 : 1.00 205154.86 801.39 0.00 0.00 621.38 267.00 879.88 00:12:00.308 =================================================================================================================== 00:12:00.308 Total : 205154.86 801.39 0.00 0.00 621.38 267.00 879.88 00:12:00.308 00:12:00.308 Latency(us) 00:12:00.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.308 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:00.308 Nvme1n1 : 1.01 4893.89 19.12 0.00 0.00 25979.99 8786.68 57089.14 00:12:00.308 =================================================================================================================== 00:12:00.308 Total : 4893.89 19.12 0.00 0.00 25979.99 8786.68 57089.14 00:12:00.569 03:25:37 -- target/bdev_io_wait.sh@38 -- # wait 223660 00:12:00.569 03:25:37 -- target/bdev_io_wait.sh@39 -- # wait 223662 00:12:00.569 03:25:37 -- target/bdev_io_wait.sh@40 -- # wait 223664 00:12:00.569 03:25:37 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.569 03:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:00.569 03:25:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.569 03:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:00.569 03:25:38 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:00.569 03:25:38 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:00.569 03:25:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:00.569 03:25:38 -- nvmf/common.sh@117 -- # sync 00:12:00.569 03:25:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.569 03:25:38 -- nvmf/common.sh@120 -- # set +e 00:12:00.569 03:25:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.569 03:25:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.569 rmmod nvme_tcp 00:12:00.569 rmmod nvme_fabrics 00:12:00.569 rmmod nvme_keyring 00:12:00.569 03:25:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:00.569 03:25:38 -- nvmf/common.sh@124 -- # set -e 00:12:00.569 03:25:38 -- nvmf/common.sh@125 -- # return 0 00:12:00.569 03:25:38 -- nvmf/common.sh@478 -- # '[' -n 223630 ']' 00:12:00.569 03:25:38 -- nvmf/common.sh@479 -- # killprocess 223630 00:12:00.569 03:25:38 -- common/autotest_common.sh@936 -- # '[' -z 223630 ']' 00:12:00.569 03:25:38 -- 
common/autotest_common.sh@940 -- # kill -0 223630 00:12:00.569 03:25:38 -- common/autotest_common.sh@941 -- # uname 00:12:00.569 03:25:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:00.569 03:25:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 223630 00:12:00.569 03:25:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:00.569 03:25:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:00.569 03:25:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 223630' 00:12:00.569 killing process with pid 223630 00:12:00.569 03:25:38 -- common/autotest_common.sh@955 -- # kill 223630 00:12:00.569 03:25:38 -- common/autotest_common.sh@960 -- # wait 223630 00:12:00.829 03:25:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:00.829 03:25:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:00.829 03:25:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:00.829 03:25:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:00.829 03:25:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:00.829 03:25:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.829 03:25:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:00.829 03:25:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.380 03:25:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:03.380 00:12:03.380 real 0m7.162s 00:12:03.380 user 0m15.862s 00:12:03.380 sys 0m3.519s 00:12:03.380 03:25:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:03.380 03:25:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.380 ************************************ 00:12:03.380 END TEST nvmf_bdev_io_wait 00:12:03.380 ************************************ 00:12:03.380 03:25:40 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:03.380 03:25:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:03.380 03:25:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:03.380 03:25:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.380 ************************************ 00:12:03.380 START TEST nvmf_queue_depth 00:12:03.380 ************************************ 00:12:03.380 03:25:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:03.380 * Looking for test storage... 
00:12:03.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.380 03:25:40 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.380 03:25:40 -- nvmf/common.sh@7 -- # uname -s 00:12:03.380 03:25:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.380 03:25:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.380 03:25:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.380 03:25:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.380 03:25:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.380 03:25:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.380 03:25:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.380 03:25:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.380 03:25:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.380 03:25:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.380 03:25:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:03.380 03:25:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:03.380 03:25:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.380 03:25:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.380 03:25:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.380 03:25:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.380 03:25:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.380 03:25:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.380 03:25:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.380 03:25:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.380 03:25:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.380 03:25:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.380 03:25:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.380 03:25:40 -- paths/export.sh@5 -- # export PATH 00:12:03.380 03:25:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.380 03:25:40 -- nvmf/common.sh@47 -- # : 0 00:12:03.380 03:25:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.380 03:25:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.380 03:25:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.380 03:25:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.380 03:25:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.380 03:25:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.380 03:25:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.380 03:25:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.380 03:25:40 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:03.380 03:25:40 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:03.380 03:25:40 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:03.380 03:25:40 -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:03.380 03:25:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:03.380 03:25:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.380 03:25:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:03.380 03:25:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:03.380 03:25:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:03.380 03:25:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.380 03:25:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.380 03:25:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.380 03:25:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:03.380 03:25:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:03.380 03:25:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:03.380 03:25:40 -- common/autotest_common.sh@10 -- # set +x 00:12:05.292 03:25:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:05.292 03:25:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:05.292 03:25:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:05.292 03:25:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:05.292 03:25:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:05.292 03:25:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:05.292 03:25:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:05.292 03:25:42 -- nvmf/common.sh@295 -- # net_devs=() 
00:12:05.292 03:25:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:05.292 03:25:42 -- nvmf/common.sh@296 -- # e810=() 00:12:05.292 03:25:42 -- nvmf/common.sh@296 -- # local -ga e810 00:12:05.292 03:25:42 -- nvmf/common.sh@297 -- # x722=() 00:12:05.292 03:25:42 -- nvmf/common.sh@297 -- # local -ga x722 00:12:05.292 03:25:42 -- nvmf/common.sh@298 -- # mlx=() 00:12:05.292 03:25:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:05.292 03:25:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.292 03:25:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.292 03:25:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.292 03:25:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.292 03:25:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.292 03:25:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.292 03:25:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.292 03:25:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.292 03:25:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.292 03:25:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.292 03:25:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.292 03:25:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:05.292 03:25:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:05.292 03:25:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:05.292 03:25:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.292 03:25:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:05.292 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:05.292 03:25:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.292 03:25:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:05.292 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:05.292 03:25:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:05.292 03:25:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.292 03:25:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.292 03:25:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:05.292 03:25:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
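The device scan traced above walks the PCI bus and resolves each matching function to its kernel interface names through sysfs. A minimal standalone sketch of that lookup, assuming the E810 device ID 0x159b seen in this log; it is an illustration, not the full gather_supported_nvmf_pci_devs logic from nvmf/common.sh:

#!/usr/bin/env bash
# Sketch: list the net devices backing Intel E810 ports (0x8086:0x159b),
# using the same sysfs walk the discovery loop above performs.
intel=0x8086
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == 0x159b ]] || continue
    [[ -d $pci/net ]] || continue                 # port has no bound netdev driver
    pci_net_devs=("$pci/net/"*)                   # one sysfs entry per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")       # strip the path, keep ifnames
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
done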
00:12:05.292 03:25:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:05.292 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:05.292 03:25:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.292 03:25:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.292 03:25:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.292 03:25:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:05.292 03:25:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.292 03:25:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:05.292 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:05.292 03:25:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.292 03:25:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:05.292 03:25:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:05.292 03:25:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:05.292 03:25:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:05.292 03:25:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.292 03:25:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.292 03:25:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.292 03:25:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:05.292 03:25:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.292 03:25:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.292 03:25:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:05.292 03:25:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.292 03:25:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.293 03:25:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:05.293 03:25:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:05.293 03:25:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.293 03:25:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.293 03:25:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.293 03:25:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.293 03:25:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:05.293 03:25:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.293 03:25:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.293 03:25:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.293 03:25:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:05.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:12:05.293 00:12:05.293 --- 10.0.0.2 ping statistics --- 00:12:05.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.293 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:12:05.293 03:25:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:05.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:12:05.293 00:12:05.293 --- 10.0.0.1 ping statistics --- 00:12:05.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.293 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:12:05.293 03:25:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.293 03:25:42 -- nvmf/common.sh@411 -- # return 0 00:12:05.293 03:25:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:05.293 03:25:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.293 03:25:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:05.293 03:25:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:05.293 03:25:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.293 03:25:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:05.293 03:25:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:05.293 03:25:42 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:05.293 03:25:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:05.293 03:25:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:05.293 03:25:42 -- common/autotest_common.sh@10 -- # set +x 00:12:05.293 03:25:42 -- nvmf/common.sh@470 -- # nvmfpid=225884 00:12:05.293 03:25:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:05.293 03:25:42 -- nvmf/common.sh@471 -- # waitforlisten 225884 00:12:05.293 03:25:42 -- common/autotest_common.sh@817 -- # '[' -z 225884 ']' 00:12:05.293 03:25:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.293 03:25:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:05.293 03:25:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.293 03:25:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:05.293 03:25:42 -- common/autotest_common.sh@10 -- # set +x 00:12:05.293 [2024-04-19 03:25:42.607311] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:12:05.293 [2024-04-19 03:25:42.607415] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.293 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.293 [2024-04-19 03:25:42.677284] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.293 [2024-04-19 03:25:42.785917] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.293 [2024-04-19 03:25:42.785981] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.293 [2024-04-19 03:25:42.785994] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.293 [2024-04-19 03:25:42.786005] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.293 [2024-04-19 03:25:42.786015] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
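The nvmfappstart step above amounts to launching nvmf_tgt inside the freshly built namespace and blocking until its RPC socket answers (the target's unix socket lives on the shared filesystem, so rpc.py can reach it from the default namespace). A hedged sketch of that sequence, using the paths and arguments from this log; the polling loop is a simplified stand-in for the waitforlisten helper in autotest_common.sh:

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# rpc.py exits non-zero until the target is listening on /var/tmp/spdk.sock.
until "$SPDK_ROOT/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo 'nvmf_tgt died during startup'; exit 1; }
    sleep 0.2
done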
00:12:05.293 [2024-04-19 03:25:42.786044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.553 03:25:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:05.553 03:25:42 -- common/autotest_common.sh@850 -- # return 0 00:12:05.553 03:25:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:05.553 03:25:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:05.553 03:25:42 -- common/autotest_common.sh@10 -- # set +x 00:12:05.553 03:25:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.553 03:25:42 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.553 03:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.553 03:25:42 -- common/autotest_common.sh@10 -- # set +x 00:12:05.553 [2024-04-19 03:25:42.936795] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.553 03:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.553 03:25:42 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:05.553 03:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.553 03:25:42 -- common/autotest_common.sh@10 -- # set +x 00:12:05.553 Malloc0 00:12:05.553 03:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.553 03:25:42 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:05.553 03:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.553 03:25:42 -- common/autotest_common.sh@10 -- # set +x 00:12:05.553 03:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.553 03:25:42 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:05.553 03:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.553 03:25:42 -- common/autotest_common.sh@10 -- # set +x 00:12:05.553 03:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.553 03:25:43 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.553 03:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.553 03:25:43 -- common/autotest_common.sh@10 -- # set +x 00:12:05.553 [2024-04-19 03:25:43.004816] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.553 03:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.553 03:25:43 -- target/queue_depth.sh@30 -- # bdevperf_pid=226026 00:12:05.553 03:25:43 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:05.553 03:25:43 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:05.553 03:25:43 -- target/queue_depth.sh@33 -- # waitforlisten 226026 /var/tmp/bdevperf.sock 00:12:05.553 03:25:43 -- common/autotest_common.sh@817 -- # '[' -z 226026 ']' 00:12:05.553 03:25:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:05.553 03:25:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:05.553 03:25:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:12:05.553 03:25:43 -- common/autotest_common.sh@826 -- # xtrace_disable
00:12:05.553 03:25:43 -- common/autotest_common.sh@10 -- # set +x
00:12:05.553 [2024-04-19 03:25:43.050762] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization...
00:12:05.553 [2024-04-19 03:25:43.050824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid226026 ]
00:12:05.553 EAL: No free 2048 kB hugepages reported on node 1
00:12:05.815 [2024-04-19 03:25:43.112655] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:05.815 [2024-04-19 03:25:43.228556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:06.757 03:25:43 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:12:06.757 03:25:43 -- common/autotest_common.sh@850 -- # return 0
00:12:06.757 03:25:43 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:12:06.757 03:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:06.757 03:25:43 -- common/autotest_common.sh@10 -- # set +x
00:12:06.757 NVMe0n1
00:12:06.757 03:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:06.757 03:25:44 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:12:06.757 Running I/O for 10 seconds...
00:12:19.000
00:12:19.000 Latency(us)
00:12:19.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:19.000 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:12:19.000 Verification LBA range: start 0x0 length 0x4000
00:12:19.000 NVMe0n1 : 10.10 8298.08 32.41 0.00 0.00 122881.75 24272.59 76118.85
00:12:19.000 ===================================================================================================================
00:12:19.000 Total : 8298.08 32.41 0.00 0.00 122881.75 24272.59 76118.85
00:12:19.000 0
00:12:19.000 03:25:54 -- target/queue_depth.sh@39 -- # killprocess 226026
00:12:19.000 03:25:54 -- common/autotest_common.sh@936 -- # '[' -z 226026 ']'
00:12:19.000 03:25:54 -- common/autotest_common.sh@940 -- # kill -0 226026
00:12:19.000 03:25:54 -- common/autotest_common.sh@941 -- # uname
00:12:19.000 03:25:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:19.000 03:25:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 226026
00:12:19.000 03:25:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:12:19.000 03:25:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:12:19.000 03:25:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 226026'
killing process with pid 226026
03:25:54 -- common/autotest_common.sh@955 -- # kill 226026
Received shutdown signal, test time was about 10.000000 seconds
00:12:19.000
00:12:19.000 Latency(us)
00:12:19.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:19.000 ===================================================================================================================
00:12:19.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:12:19.000 03:25:54 --
common/autotest_common.sh@960 -- # wait 226026 00:12:19.000 03:25:54 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:19.000 03:25:54 -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:19.000 03:25:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:19.000 03:25:54 -- nvmf/common.sh@117 -- # sync 00:12:19.000 03:25:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.000 03:25:54 -- nvmf/common.sh@120 -- # set +e 00:12:19.000 03:25:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.000 03:25:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.000 rmmod nvme_tcp 00:12:19.000 rmmod nvme_fabrics 00:12:19.000 rmmod nvme_keyring 00:12:19.000 03:25:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.000 03:25:54 -- nvmf/common.sh@124 -- # set -e 00:12:19.000 03:25:54 -- nvmf/common.sh@125 -- # return 0 00:12:19.000 03:25:54 -- nvmf/common.sh@478 -- # '[' -n 225884 ']' 00:12:19.000 03:25:54 -- nvmf/common.sh@479 -- # killprocess 225884 00:12:19.000 03:25:54 -- common/autotest_common.sh@936 -- # '[' -z 225884 ']' 00:12:19.000 03:25:54 -- common/autotest_common.sh@940 -- # kill -0 225884 00:12:19.000 03:25:54 -- common/autotest_common.sh@941 -- # uname 00:12:19.000 03:25:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:19.000 03:25:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 225884 00:12:19.000 03:25:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:19.000 03:25:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:19.000 03:25:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 225884' 00:12:19.000 killing process with pid 225884 00:12:19.000 03:25:54 -- common/autotest_common.sh@955 -- # kill 225884 00:12:19.000 03:25:54 -- common/autotest_common.sh@960 -- # wait 225884 00:12:19.000 03:25:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:19.000 03:25:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:19.000 03:25:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:19.000 03:25:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.000 03:25:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.000 03:25:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.000 03:25:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.000 03:25:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.567 03:25:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.567 00:12:19.567 real 0m16.596s 00:12:19.567 user 0m24.113s 00:12:19.567 sys 0m2.901s 00:12:19.567 03:25:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:19.567 03:25:57 -- common/autotest_common.sh@10 -- # set +x 00:12:19.567 ************************************ 00:12:19.567 END TEST nvmf_queue_depth 00:12:19.567 ************************************ 00:12:19.826 03:25:57 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:19.826 03:25:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:19.826 03:25:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.826 03:25:57 -- common/autotest_common.sh@10 -- # set +x 00:12:19.826 ************************************ 00:12:19.826 START TEST nvmf_multipath 00:12:19.826 ************************************ 00:12:19.826 03:25:57 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:19.826 * Looking for test storage... 00:12:19.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.826 03:25:57 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.826 03:25:57 -- nvmf/common.sh@7 -- # uname -s 00:12:19.826 03:25:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.826 03:25:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.826 03:25:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.826 03:25:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.826 03:25:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.826 03:25:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.826 03:25:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.826 03:25:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.826 03:25:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.826 03:25:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.826 03:25:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.826 03:25:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.826 03:25:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.826 03:25:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.826 03:25:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.826 03:25:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.826 03:25:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.826 03:25:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.826 03:25:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.826 03:25:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.827 03:25:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.827 03:25:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.827 03:25:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.827 03:25:57 -- paths/export.sh@5 -- # export PATH 00:12:19.827 03:25:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.827 03:25:57 -- nvmf/common.sh@47 -- # : 0 00:12:19.827 03:25:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.827 03:25:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.827 03:25:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.827 03:25:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.827 03:25:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.827 03:25:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.827 03:25:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.827 03:25:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.827 03:25:57 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.827 03:25:57 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.827 03:25:57 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:19.827 03:25:57 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.827 03:25:57 -- target/multipath.sh@43 -- # nvmftestinit 00:12:19.827 03:25:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:19.827 03:25:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.827 03:25:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:19.827 03:25:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:19.827 03:25:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:19.827 03:25:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.827 03:25:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.827 03:25:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.827 03:25:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:19.827 03:25:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:19.827 03:25:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.827 03:25:57 -- common/autotest_common.sh@10 -- # set +x 00:12:22.362 03:25:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:22.362 03:25:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:22.362 03:25:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:22.362 03:25:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:22.362 03:25:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:22.363 03:25:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:22.363 03:25:59 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:12:22.363 03:25:59 -- nvmf/common.sh@295 -- # net_devs=() 00:12:22.363 03:25:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:22.363 03:25:59 -- nvmf/common.sh@296 -- # e810=() 00:12:22.363 03:25:59 -- nvmf/common.sh@296 -- # local -ga e810 00:12:22.363 03:25:59 -- nvmf/common.sh@297 -- # x722=() 00:12:22.363 03:25:59 -- nvmf/common.sh@297 -- # local -ga x722 00:12:22.363 03:25:59 -- nvmf/common.sh@298 -- # mlx=() 00:12:22.363 03:25:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:22.363 03:25:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.363 03:25:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.363 03:25:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.363 03:25:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.363 03:25:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.363 03:25:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.363 03:25:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.363 03:25:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.363 03:25:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.363 03:25:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.363 03:25:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.363 03:25:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:22.363 03:25:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:22.363 03:25:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:22.363 03:25:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.363 03:25:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:22.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:22.363 03:25:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.363 03:25:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:22.363 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:22.363 03:25:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:22.363 03:25:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.363 03:25:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.363 03:25:59 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:12:22.363 03:25:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.363 03:25:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:22.363 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:22.363 03:25:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.363 03:25:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.363 03:25:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.363 03:25:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:22.363 03:25:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.363 03:25:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:22.363 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:22.363 03:25:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.363 03:25:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:22.363 03:25:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:22.363 03:25:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:22.363 03:25:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.363 03:25:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.363 03:25:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.363 03:25:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:22.363 03:25:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.363 03:25:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.363 03:25:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:22.363 03:25:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.363 03:25:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.363 03:25:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:22.363 03:25:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:22.363 03:25:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.363 03:25:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.363 03:25:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.363 03:25:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.363 03:25:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:22.363 03:25:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.363 03:25:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.363 03:25:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.363 03:25:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:22.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:12:22.363 00:12:22.363 --- 10.0.0.2 ping statistics --- 00:12:22.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.363 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:12:22.363 03:25:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:22.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:12:22.363 00:12:22.363 --- 10.0.0.1 ping statistics --- 00:12:22.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.363 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:12:22.363 03:25:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.363 03:25:59 -- nvmf/common.sh@411 -- # return 0 00:12:22.363 03:25:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:22.363 03:25:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.363 03:25:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.363 03:25:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:22.363 03:25:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:22.363 03:25:59 -- target/multipath.sh@45 -- # '[' -z ']' 00:12:22.363 03:25:59 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:22.363 only one NIC for nvmf test 00:12:22.363 03:25:59 -- target/multipath.sh@47 -- # nvmftestfini 00:12:22.363 03:25:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:22.363 03:25:59 -- nvmf/common.sh@117 -- # sync 00:12:22.363 03:25:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:22.363 03:25:59 -- nvmf/common.sh@120 -- # set +e 00:12:22.363 03:25:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:22.363 03:25:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:22.363 rmmod nvme_tcp 00:12:22.363 rmmod nvme_fabrics 00:12:22.363 rmmod nvme_keyring 00:12:22.363 03:25:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:22.363 03:25:59 -- nvmf/common.sh@124 -- # set -e 00:12:22.363 03:25:59 -- nvmf/common.sh@125 -- # return 0 00:12:22.363 03:25:59 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:12:22.363 03:25:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:22.363 03:25:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:22.363 03:25:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.363 03:25:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:22.363 03:25:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.363 03:25:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.363 03:25:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.271 03:26:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:24.271 03:26:01 -- target/multipath.sh@48 -- # exit 0 00:12:24.271 03:26:01 -- target/multipath.sh@1 -- # nvmftestfini 00:12:24.271 03:26:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:24.271 03:26:01 -- nvmf/common.sh@117 -- # sync 00:12:24.271 03:26:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:24.271 03:26:01 -- nvmf/common.sh@120 -- # set +e 00:12:24.271 03:26:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.271 03:26:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:24.271 03:26:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.271 03:26:01 -- nvmf/common.sh@124 -- # set -e 00:12:24.271 03:26:01 -- nvmf/common.sh@125 -- # return 0 00:12:24.271 03:26:01 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:12:24.271 03:26:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:24.271 03:26:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:24.271 03:26:01 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:12:24.271 03:26:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.271 03:26:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:24.271 03:26:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.271 03:26:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.271 03:26:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.271 03:26:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:24.271 00:12:24.271 real 0m4.380s 00:12:24.271 user 0m0.815s 00:12:24.271 sys 0m1.566s 00:12:24.271 03:26:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:24.271 03:26:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.271 ************************************ 00:12:24.271 END TEST nvmf_multipath 00:12:24.271 ************************************ 00:12:24.271 03:26:01 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:24.271 03:26:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:24.271 03:26:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:24.271 03:26:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.271 ************************************ 00:12:24.271 START TEST nvmf_zcopy 00:12:24.271 ************************************ 00:12:24.271 03:26:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:24.271 * Looking for test storage... 00:12:24.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.271 03:26:01 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.271 03:26:01 -- nvmf/common.sh@7 -- # uname -s 00:12:24.271 03:26:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.271 03:26:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.271 03:26:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.271 03:26:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.271 03:26:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.271 03:26:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.271 03:26:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.271 03:26:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.271 03:26:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.271 03:26:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.271 03:26:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:24.271 03:26:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:24.271 03:26:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.271 03:26:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.271 03:26:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.271 03:26:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.271 03:26:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.271 03:26:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.271 03:26:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.271 03:26:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.271 
03:26:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.271 03:26:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.271 03:26:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.271 03:26:01 -- paths/export.sh@5 -- # export PATH 00:12:24.271 03:26:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.271 03:26:01 -- nvmf/common.sh@47 -- # : 0 00:12:24.271 03:26:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:24.271 03:26:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:24.271 03:26:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.271 03:26:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.271 03:26:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.271 03:26:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:24.271 03:26:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:24.271 03:26:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:24.271 03:26:01 -- target/zcopy.sh@12 -- # nvmftestinit 00:12:24.271 03:26:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:24.271 03:26:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.271 03:26:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:24.271 03:26:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:24.271 03:26:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:24.271 03:26:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.271 03:26:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
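The eval '_remove_spdk_ns 14> /dev/null' pattern that recurs throughout this log is how xtrace_disable_per_cmd mutes tracing for a single command: with bash's BASH_XTRACEFD pointed at fd 14 (an assumption consistent with the redirection shown here), sending fd 14 to /dev/null hides the trace produced while that command runs, without a global set +x. A minimal reproduction under that assumption:

exec 14>&2            # fd 14 mirrors stderr, the usual trace destination
BASH_XTRACEFD=14      # bash now writes set -x output to fd 14
set -x
noisy() { echo step1; echo step2; }
noisy                           # every line of the body is traced
eval 'noisy 14> /dev/null'      # the call itself still traces (as in this log),
                                # but the body's trace lands in /dev/null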
00:12:24.271 03:26:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.271 03:26:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:24.271 03:26:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:24.271 03:26:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:24.271 03:26:01 -- common/autotest_common.sh@10 -- # set +x 00:12:26.178 03:26:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:26.178 03:26:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.178 03:26:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.178 03:26:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.178 03:26:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.178 03:26:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.178 03:26:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.178 03:26:03 -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.178 03:26:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.178 03:26:03 -- nvmf/common.sh@296 -- # e810=() 00:12:26.178 03:26:03 -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.178 03:26:03 -- nvmf/common.sh@297 -- # x722=() 00:12:26.178 03:26:03 -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.178 03:26:03 -- nvmf/common.sh@298 -- # mlx=() 00:12:26.178 03:26:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.178 03:26:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.178 03:26:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.178 03:26:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.178 03:26:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.178 03:26:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.178 03:26:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.178 03:26:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.178 03:26:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.178 03:26:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.178 03:26:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.178 03:26:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.178 03:26:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.178 03:26:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.178 03:26:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.178 03:26:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.178 03:26:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:26.178 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:26.178 03:26:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.178 03:26:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:26.178 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:12:26.178 03:26:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.178 03:26:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.178 03:26:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.178 03:26:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.178 03:26:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:26.178 03:26:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.178 03:26:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:26.178 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:26.178 03:26:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.178 03:26:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.178 03:26:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.178 03:26:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:26.178 03:26:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.178 03:26:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:26.178 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:26.437 03:26:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.437 03:26:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:26.437 03:26:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:26.437 03:26:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:26.437 03:26:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:26.437 03:26:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:26.437 03:26:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.437 03:26:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.437 03:26:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.437 03:26:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.437 03:26:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.437 03:26:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.437 03:26:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.437 03:26:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.437 03:26:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.437 03:26:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.437 03:26:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.437 03:26:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.437 03:26:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.437 03:26:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.437 03:26:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.437 03:26:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.437 03:26:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.437 03:26:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.437 
03:26:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.437 03:26:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:12:26.437 00:12:26.437 --- 10.0.0.2 ping statistics --- 00:12:26.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.437 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:12:26.437 03:26:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:12:26.438 00:12:26.438 --- 10.0.0.1 ping statistics --- 00:12:26.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.438 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:26.438 03:26:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.438 03:26:03 -- nvmf/common.sh@411 -- # return 0 00:12:26.438 03:26:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:26.438 03:26:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.438 03:26:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:26.438 03:26:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:26.438 03:26:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.438 03:26:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:26.438 03:26:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:26.438 03:26:03 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:26.438 03:26:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:26.438 03:26:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:26.438 03:26:03 -- common/autotest_common.sh@10 -- # set +x 00:12:26.438 03:26:03 -- nvmf/common.sh@470 -- # nvmfpid=231233 00:12:26.438 03:26:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:26.438 03:26:03 -- nvmf/common.sh@471 -- # waitforlisten 231233 00:12:26.438 03:26:03 -- common/autotest_common.sh@817 -- # '[' -z 231233 ']' 00:12:26.438 03:26:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.438 03:26:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:26.438 03:26:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.438 03:26:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:26.438 03:26:03 -- common/autotest_common.sh@10 -- # set +x 00:12:26.438 [2024-04-19 03:26:03.933264] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:12:26.438 [2024-04-19 03:26:03.933344] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.438 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.697 [2024-04-19 03:26:03.998605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.697 [2024-04-19 03:26:04.103611] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:26.697 03:26:04 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:12:26.697 03:26:04 -- common/autotest_common.sh@850 -- # return 0
00:12:26.697 03:26:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:12:26.697 03:26:04 -- common/autotest_common.sh@716 -- # xtrace_disable
00:12:26.697 03:26:04 -- common/autotest_common.sh@10 -- # set +x
00:12:26.697 03:26:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:26.697 03:26:04 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:12:26.697 03:26:04 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:12:26.697 03:26:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:26.697 03:26:04 -- common/autotest_common.sh@10 -- # set +x
00:12:26.697 [2024-04-19 03:26:04.243044] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:26.697 03:26:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:26.697 03:26:04 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:26.697 03:26:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:26.697 03:26:04 -- common/autotest_common.sh@10 -- # set +x
00:12:26.697 03:26:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:26.697 03:26:04 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:26.697 03:26:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:26.697 03:26:04 -- common/autotest_common.sh@10 -- # set +x
00:12:26.959 [2024-04-19 03:26:04.259268] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:26.959 03:26:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:26.959 03:26:04 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:26.959 03:26:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:26.959 03:26:04 -- common/autotest_common.sh@10 -- # set +x
00:12:26.959 03:26:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:26.959 03:26:04 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:12:26.959 03:26:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:26.959 03:26:04 -- common/autotest_common.sh@10 -- # set +x
00:12:26.959 malloc0
00:12:26.959 03:26:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:26.959 03:26:04 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:12:26.959 03:26:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:26.959 03:26:04 -- common/autotest_common.sh@10 -- # set +x
00:12:26.959 03:26:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
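The rpc_cmd calls above go through the autotest wrapper for SPDK's scripts/rpc.py, so the provisioning step is equivalent to the following sequence (a sketch assuming the default /var/tmp/spdk.sock RPC socket):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy   # TCP transport, in-capsule data size 0, zero-copy enabled
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # -a: allow any host, -m: up to 10 namespaces
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0          # 32 MB RAM-backed bdev, 4096-byte blocks
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose malloc0 as namespace 1

Note that the UNIX-domain RPC socket remains reachable from the default namespace even though nvmf_tgt runs inside cvl_0_0_ns_spdk; network namespaces do not isolate filesystem paths.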
00:12:26.959 03:26:04 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:12:26.959 03:26:04 -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:12:26.959 03:26:04 -- nvmf/common.sh@521 -- # config=()
00:12:26.959 03:26:04 -- nvmf/common.sh@521 -- # local subsystem config
00:12:26.959 03:26:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:12:26.959 03:26:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:12:26.959 {
00:12:26.959 "params": {
00:12:26.959 "name": "Nvme$subsystem",
00:12:26.959 "trtype": "$TEST_TRANSPORT",
00:12:26.959 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:26.959 "adrfam": "ipv4",
00:12:26.959 "trsvcid": "$NVMF_PORT",
00:12:26.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:26.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:26.959 "hdgst": ${hdgst:-false},
00:12:26.959 "ddgst": ${ddgst:-false}
00:12:26.959 },
00:12:26.959 "method": "bdev_nvme_attach_controller"
00:12:26.959 }
00:12:26.959 EOF
00:12:26.959 )")
00:12:26.959 03:26:04 -- nvmf/common.sh@543 -- # cat
00:12:26.959 03:26:04 -- nvmf/common.sh@545 -- # jq .
00:12:26.959 03:26:04 -- nvmf/common.sh@546 -- # IFS=,
00:12:26.959 03:26:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:12:26.959 "params": {
00:12:26.959 "name": "Nvme1",
00:12:26.959 "trtype": "tcp",
00:12:26.959 "traddr": "10.0.0.2",
00:12:26.959 "adrfam": "ipv4",
00:12:26.959 "trsvcid": "4420",
00:12:26.959 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:26.959 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:26.959 "hdgst": false,
00:12:26.959 "ddgst": false
00:12:26.959 },
00:12:26.959 "method": "bdev_nvme_attach_controller"
00:12:26.959 }'
00:12:26.959 [2024-04-19 03:26:04.338082] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization...
00:12:26.959 [2024-04-19 03:26:04.338152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231262 ]
00:12:26.959 EAL: No free 2048 kB hugepages reported on node 1
00:12:27.220 [2024-04-19 03:26:04.399543] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:27.479 [2024-04-19 03:26:04.522591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:27.479 Running I/O for 10 seconds...
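The --json /dev/fd/62 argument is bash process substitution at work: gen_nvmf_target_json emits the bdev subsystem config (the jq-expanded JSON visible in the trace, one bdev_nvme_attach_controller entry pointing at the target) on an anonymous file descriptor, so bdevperf needs no config file on disk. A sketch of the same invocation, assuming nvmf/common.sh is sourced for gen_nvmf_target_json:

  # -t 10: run 10 s, -q 128: queue depth, -w verify: write-then-read-back workload, -o 8192: 8 KiB I/Os
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192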
00:12:37.466
00:12:37.466 Latency(us)
00:12:37.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:37.466 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:12:37.466 Verification LBA range: start 0x0 length 0x1000
00:12:37.466 Nvme1n1 : 10.02 5605.94 43.80 0.00 0.00 22772.03 2427.26 31845.64
00:12:37.466 ===================================================================================================================
00:12:37.466 Total : 5605.94 43.80 0.00 0.00 22772.03 2427.26 31845.64
00:12:37.727 03:26:15 -- target/zcopy.sh@39 -- # perfpid=232565
00:12:37.727 03:26:15 -- target/zcopy.sh@41 -- # xtrace_disable
00:12:37.727 03:26:15 -- common/autotest_common.sh@10 -- # set +x
00:12:37.727 03:26:15 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:12:37.727 03:26:15 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:12:37.727 03:26:15 -- nvmf/common.sh@521 -- # config=()
00:12:37.727 03:26:15 -- nvmf/common.sh@521 -- # local subsystem config
00:12:37.727 03:26:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:12:37.727 03:26:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:12:37.727 {
00:12:37.727 "params": {
00:12:37.727 "name": "Nvme$subsystem",
00:12:37.727 "trtype": "$TEST_TRANSPORT",
00:12:37.727 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:37.727 "adrfam": "ipv4",
00:12:37.727 "trsvcid": "$NVMF_PORT",
00:12:37.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:37.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:37.727 "hdgst": ${hdgst:-false},
00:12:37.727 "ddgst": ${ddgst:-false}
00:12:37.727 },
00:12:37.727 "method": "bdev_nvme_attach_controller"
00:12:37.727 }
00:12:37.727 EOF
00:12:37.727 )")
00:12:37.727 03:26:15 -- nvmf/common.sh@543 -- # cat
00:12:37.727 [2024-04-19 03:26:15.157290] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.727 [2024-04-19 03:26:15.157341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.727 03:26:15 -- nvmf/common.sh@545 -- # jq .
00:12:37.727 03:26:15 -- nvmf/common.sh@546 -- # IFS=,
00:12:37.727 03:26:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:12:37.727 "params": {
00:12:37.727 "name": "Nvme1",
00:12:37.727 "trtype": "tcp",
00:12:37.727 "traddr": "10.0.0.2",
00:12:37.727 "adrfam": "ipv4",
00:12:37.727 "trsvcid": "4420",
00:12:37.727 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:37.727 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:37.727 "hdgst": false,
00:12:37.727 "ddgst": false
00:12:37.727 },
00:12:37.727 "method": "bdev_nvme_attach_controller"
00:12:37.727 }'
00:12:37.727 [2024-04-19 03:26:15.165250] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.727 [2024-04-19 03:26:15.165285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.727 [2024-04-19 03:26:15.173269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.727 [2024-04-19 03:26:15.173294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.727 [2024-04-19 03:26:15.181283] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.727 [2024-04-19 03:26:15.181306] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.727 [2024-04-19 03:26:15.189300] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.727 [2024-04-19 03:26:15.189321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.727 [2024-04-19 03:26:15.196955] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization...
00:12:37.727 [2024-04-19 03:26:15.197035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232565 ]
00:12:37.727 [2024-04-19 03:26:15.197336] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.727 [2024-04-19 03:26:15.197360] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.727 [2024-04-19 03:26:15.205357] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.727 [2024-04-19 03:26:15.205389] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.727 [2024-04-19 03:26:15.213387] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.727 [2024-04-19 03:26:15.213412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.727 [2024-04-19 03:26:15.221411] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.727 [2024-04-19 03:26:15.221447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.727 [2024-04-19 03:26:15.229443] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.727 [2024-04-19 03:26:15.229463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.727 EAL: No free 2048 kB hugepages reported on node 1
00:12:37.727 [2024-04-19 03:26:15.237471] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.727 [2024-04-19 03:26:15.237492] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.727 [2024-04-19 03:26:15.245479] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1
already in use 00:12:37.727 [2024-04-19 03:26:15.245501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.727 [2024-04-19 03:26:15.253504] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.727 [2024-04-19 03:26:15.253524] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.727 [2024-04-19 03:26:15.261504] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.727 [2024-04-19 03:26:15.261526] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.727 [2024-04-19 03:26:15.265104] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.727 [2024-04-19 03:26:15.269541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.727 [2024-04-19 03:26:15.269567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.727 [2024-04-19 03:26:15.277583] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.727 [2024-04-19 03:26:15.277618] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.987 [2024-04-19 03:26:15.285573] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.987 [2024-04-19 03:26:15.285611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.987 [2024-04-19 03:26:15.293593] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.987 [2024-04-19 03:26:15.293615] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.987 [2024-04-19 03:26:15.301614] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.987 [2024-04-19 03:26:15.301646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.987 [2024-04-19 03:26:15.309637] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.987 [2024-04-19 03:26:15.309674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.987 [2024-04-19 03:26:15.317678] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.987 [2024-04-19 03:26:15.317703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.987 [2024-04-19 03:26:15.325703] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.987 [2024-04-19 03:26:15.325729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.333760] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.333799] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.341756] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.341783] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.349777] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.349804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.357800] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 
03:26:15.357826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.365824] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.365849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.373844] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.373869] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.381866] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.381890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.383362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.988 [2024-04-19 03:26:15.389888] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.389913] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.397921] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.397950] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.405963] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.406015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.413986] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.414026] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.422010] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.422053] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.430037] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.430079] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.438057] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.438099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.446091] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.446148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.454068] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.454093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.462122] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.462163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.470144] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.470187] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.478137] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.478164] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.486153] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.486177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.494182] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.494204] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.502202] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.502230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.510210] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.510235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.518231] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.518254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.526256] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.526279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.534275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.534297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.988 [2024-04-19 03:26:15.542296] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.988 [2024-04-19 03:26:15.542318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.550317] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.550340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.558341] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.558377] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.566378] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.566411] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.574411] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.574452] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.582435] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.582460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.590455] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.590479] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.598467] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.598499] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.606487] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.606510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.614510] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.614532] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.622599] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.622625] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.630616] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.630640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.638637] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.638659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.646658] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.646693] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.654694] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.654715] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.662718] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.662741] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.670736] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.670758] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.678772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.678793] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.686776] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.686797] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.694798] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.694819] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.702820] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.702841] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.710844] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.710867] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.718868] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.718889] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.726897] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.726922] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 Running I/O for 5 seconds... 00:12:38.248 [2024-04-19 03:26:15.734919] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.734942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.748147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.748177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.759559] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.759595] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.771987] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.772015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.783978] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.784006] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.248 [2024-04-19 03:26:15.795930] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.248 [2024-04-19 03:26:15.795959] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.509 [2024-04-19 03:26:15.808785] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.509 [2024-04-19 03:26:15.808828] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.509 [2024-04-19 03:26:15.821438] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.509 [2024-04-19 03:26:15.821467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.509 [2024-04-19 03:26:15.834975] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.509 [2024-04-19 03:26:15.835003] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:15.846548] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:15.846576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:15.858620] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:15.858648] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:15.870623] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:15.870650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:15.882662] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:15.882705] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:15.894486] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:15.894515] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:15.907082] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:15.907110] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:15.919495] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:15.919523] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:15.931319] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:15.931347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:15.944177] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:15.944205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:15.957054] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:15.957086] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:15.970038] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:15.970070] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:15.983204] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:15.983245] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:15.996341] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:15.996372] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:16.009322] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:16.009354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:16.022951] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:16.022978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:16.036172] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:16.036200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:16.049309] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:16.049338] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.510 [2024-04-19 03:26:16.062061] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.510 [2024-04-19 03:26:16.062088] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.801 [2024-04-19 03:26:16.074664] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.801 [2024-04-19 03:26:16.074693] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.801 [2024-04-19 03:26:16.086508] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.801 [2024-04-19 03:26:16.086536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.801 [2024-04-19 03:26:16.099501] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.801 [2024-04-19 03:26:16.099528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.801 [2024-04-19 03:26:16.112277] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.801 [2024-04-19 03:26:16.112307] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.801 [2024-04-19 03:26:16.124875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.801 [2024-04-19 03:26:16.124918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.801 [2024-04-19 03:26:16.138102] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.801 [2024-04-19 03:26:16.138132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.151427] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.151454] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.163925] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.163956] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.176975] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.177005] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.189684] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.189711] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.202783] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.202813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.216038] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.216064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.229121] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.229151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.242218] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.242249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.254867] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.254893] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.267421] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.267447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.279669] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.279710] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.291797] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.291823] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.304223] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.304249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.316585] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.316616] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.802 [2024-04-19 03:26:16.329810] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.802 [2024-04-19 03:26:16.329841] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.342912] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.342940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.356016] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.356043] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.369253] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.369279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.381856] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.381886] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.394801] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.394831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.407046] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.407076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.420121] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.420152] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.433192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.433222] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.445992] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.446022] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.459315] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.459346] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.472134] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.472164] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.485268] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.485294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.497469] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.497501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.510264] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.510290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.522682] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.522723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.535213] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.535244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.548432] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.548458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.561295] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.561321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.574376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.574421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.587300] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.062 [2024-04-19 03:26:16.587326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.062 [2024-04-19 03:26:16.599937] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.063 [2024-04-19 03:26:16.599968] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.063 [2024-04-19 03:26:16.612523] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.063 [2024-04-19 03:26:16.612550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.625275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.625306] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.638360] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.638400] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.651613] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.651640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.664970] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.665000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.678121] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.678151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.691344] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.691370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.704160] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.704190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.717720] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.717746] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.730631] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.730658] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.743083] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.743113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.755283] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.755309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.768278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.768308] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.781251] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.781282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.794132] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.794162] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.806877] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.806907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.819948] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.819978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.833067] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.833098] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.846039] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.846070] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.858643] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.858669] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.323 [2024-04-19 03:26:16.871815] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.323 [2024-04-19 03:26:16.871859] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.585 [2024-04-19 03:26:16.884746] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.585 [2024-04-19 03:26:16.884773] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.585 [2024-04-19 03:26:16.897895] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.585 [2024-04-19 03:26:16.897925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.585 [2024-04-19 03:26:16.910664] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.585 [2024-04-19 03:26:16.910707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.585 [2024-04-19 03:26:16.923493] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.585 [2024-04-19 03:26:16.923519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.585 [2024-04-19 03:26:16.936368] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.585 [2024-04-19 03:26:16.936419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.585 [2024-04-19 03:26:16.948789] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.585 [2024-04-19 03:26:16.948818] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.585 [2024-04-19 03:26:16.961273] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.585 [2024-04-19 03:26:16.961307] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.585 [2024-04-19 03:26:16.973252] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.585 [2024-04-19 03:26:16.973278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.585 [2024-04-19 03:26:16.985845] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.585 [2024-04-19 03:26:16.985871] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-entry pair -- "subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" followed by "nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" -- repeats with identical text and only the timestamps advancing, at roughly 9-16 ms intervals, from [2024-04-19 03:26:16.998152] through [2024-04-19 03:26:20.757256] (log clock 00:12:39.585 to 00:12:43.224); roughly 300 repetitions elided here ...]
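[Editor's note: the pair above is SPDK's namespace-add failure path as named in the log itself: the nvmf_subsystem_add_ns RPC pauses the subsystem, the paused-callback nvmf_rpc_ns_paused() calls spdk_nvmf_subsystem_add_ns_ext() with an explicit NSID the subsystem already holds, and the failure is reported as "Unable to add namespace". The loop is driven by the test harness re-issuing the RPC while bdevperf I/O runs; nothing in this log identifies the exact test script. A minimal sketch of how the same collision can be reproduced by hand against a running nvmf target -- the NQN, bdev name, and sizes below are illustrative assumptions, not values taken from this log:

  # one 64 MiB malloc bdev with 512-byte blocks (the name "Malloc0" is arbitrary)
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  # a subsystem that allows any host to connect
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  # the first add with an explicit NSID succeeds ...
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
  # ... a second add with the same NSID fails, logging exactly the
  # "Requested NSID 1 already in use" / "Unable to add namespace" pair seen above
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0

Whether the collision here is intentional stress or a cleanup-ordering issue cannot be determined from the log alone.]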
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.963 [2024-04-19 03:26:20.451549] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.963 [2024-04-19 03:26:20.451576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.963 [2024-04-19 03:26:20.463923] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.963 [2024-04-19 03:26:20.463948] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.963 [2024-04-19 03:26:20.476023] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.963 [2024-04-19 03:26:20.476048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.963 [2024-04-19 03:26:20.488706] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.963 [2024-04-19 03:26:20.488732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.963 [2024-04-19 03:26:20.500818] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.963 [2024-04-19 03:26:20.500843] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.963 [2024-04-19 03:26:20.513704] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.963 [2024-04-19 03:26:20.513730] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.224 [2024-04-19 03:26:20.526108] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.224 [2024-04-19 03:26:20.526140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.224 [2024-04-19 03:26:20.539010] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.224 [2024-04-19 03:26:20.539041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.224 [2024-04-19 03:26:20.555064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.224 [2024-04-19 03:26:20.555098] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.224 [2024-04-19 03:26:20.567229] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.224 [2024-04-19 03:26:20.567260] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.224 [2024-04-19 03:26:20.580113] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.224 [2024-04-19 03:26:20.580144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.224 [2024-04-19 03:26:20.593230] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.224 [2024-04-19 03:26:20.593261] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.224 [2024-04-19 03:26:20.606283] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.224 [2024-04-19 03:26:20.606314] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.224 [2024-04-19 03:26:20.619361] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.224 [2024-04-19 03:26:20.619412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.224 [2024-04-19 03:26:20.632051] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.224 [2024-04-19 03:26:20.632078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:43.224 [2024-04-19 03:26:20.644760] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.224 [2024-04-19 03:26:20.644786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:43.224 [2024-04-19 03:26:20.658022] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.224 [2024-04-19 03:26:20.658052] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:43.224 [2024-04-19 03:26:20.671051] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.224 [2024-04-19 03:26:20.671077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:43.224 [2024-04-19 03:26:20.683732] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.224 [2024-04-19 03:26:20.683758] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:43.224 [2024-04-19 03:26:20.697159] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.224 [2024-04-19 03:26:20.697189] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:43.224 [2024-04-19 03:26:20.709780] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.224 [2024-04-19 03:26:20.709806] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:43.224 [2024-04-19 03:26:20.722300] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.224 [2024-04-19 03:26:20.722330] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:43.224 [2024-04-19 03:26:20.735494] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.224 [2024-04-19 03:26:20.735521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:43.224 [2024-04-19 03:26:20.748320] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.224 [2024-04-19 03:26:20.748350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:43.224 [2024-04-19 03:26:20.757215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.224 [2024-04-19 03:26:20.757256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:43.224
00:12:43.224 Latency(us)
00:12:43.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:43.224 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:43.224 Nvme1n1 : 5.01 10088.16 78.81 0.00 0.00 12670.48 5558.42 23690.05
00:12:43.224 ===================================================================================================================
00:12:43.224 Total : 10088.16 78.81 0.00 0.00 12670.48 5558.42 23690.05
00:12:43.224 [2024-04-19 03:26:20.763034] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.224 [2024-04-19 03:26:20.763063] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:43.224 [2024-04-19 03:26:20.771052] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.224 [2024-04-19 03:26:20.771081]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.224 [2024-04-19 03:26:20.779069] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.224 [2024-04-19 03:26:20.779095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.787151] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.787202] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.795162] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.795213] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.803176] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.803226] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.811194] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.811243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.819219] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.819268] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.827247] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.827296] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.835262] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.835312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.843297] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.843345] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.851317] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.851367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.859357] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.859434] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.867378] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.867452] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.875406] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.875458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.883435] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.883502] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.891452] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.891502] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.899480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.899528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.907482] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.907520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.915465] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.915486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.923478] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.923498] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.931496] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.931516] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.939513] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.939533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.947602] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.947650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.955612] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.955658] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.963622] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.963678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.971601] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.971621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.979626] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.979648] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.987644] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.987682] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:20.995684] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:20.995708] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:21.003752] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:21.003800] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:21.011770] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:21.011816] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:21.019793] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:21.019836] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:21.027775] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:21.027799] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.485 [2024-04-19 03:26:21.035795] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.485 [2024-04-19 03:26:21.035830] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.745 [2024-04-19 03:26:21.043821] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.745 [2024-04-19 03:26:21.043847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (232565) - No such process 00:12:43.745 03:26:21 -- target/zcopy.sh@49 -- # wait 232565 00:12:43.745 03:26:21 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.745 03:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.745 03:26:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.745 03:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.745 03:26:21 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:43.745 03:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.746 03:26:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.746 delay0 00:12:43.746 03:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.746 03:26:21 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:43.746 03:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.746 03:26:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.746 03:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.746 03:26:21 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:43.746 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.746 [2024-04-19 03:26:21.124936] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:50.322 Initializing NVMe Controllers 00:12:50.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:50.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:50.322 Initialization complete. Launching workers. 
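The zcopy teardown traced above reduces to four RPCs plus one example binary: drop the paused namespace, wrap malloc0 in a delay bdev so in-flight I/O lingers long enough to be aborted, re-expose it as NSID 1, then launch the abort workload whose statistics follow below. A minimal sketch, assuming rpc_cmd simply forwards to scripts/rpc.py on the default /var/tmp/spdk.sock:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py"

  # Swap the malloc-backed namespace for a delayed one (values from the trace above).
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Issue abortable I/O against the slowed-down namespace (flags copied from the trace).
  "$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'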
00:12:50.322 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 109 00:12:50.322 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 391, failed to submit 38 00:12:50.322 success 190, unsuccess 201, failed 0 00:12:50.322 03:26:27 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:50.322 03:26:27 -- target/zcopy.sh@60 -- # nvmftestfini 00:12:50.322 03:26:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:50.322 03:26:27 -- nvmf/common.sh@117 -- # sync 00:12:50.322 03:26:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.322 03:26:27 -- nvmf/common.sh@120 -- # set +e 00:12:50.322 03:26:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.322 03:26:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.322 rmmod nvme_tcp 00:12:50.322 rmmod nvme_fabrics 00:12:50.322 rmmod nvme_keyring 00:12:50.322 03:26:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.322 03:26:27 -- nvmf/common.sh@124 -- # set -e 00:12:50.322 03:26:27 -- nvmf/common.sh@125 -- # return 0 00:12:50.322 03:26:27 -- nvmf/common.sh@478 -- # '[' -n 231233 ']' 00:12:50.322 03:26:27 -- nvmf/common.sh@479 -- # killprocess 231233 00:12:50.322 03:26:27 -- common/autotest_common.sh@936 -- # '[' -z 231233 ']' 00:12:50.322 03:26:27 -- common/autotest_common.sh@940 -- # kill -0 231233 00:12:50.322 03:26:27 -- common/autotest_common.sh@941 -- # uname 00:12:50.322 03:26:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:50.322 03:26:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 231233 00:12:50.322 03:26:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:50.322 03:26:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:50.322 03:26:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 231233' 00:12:50.322 killing process with pid 231233 00:12:50.322 03:26:27 -- common/autotest_common.sh@955 -- # kill 231233 00:12:50.322 03:26:27 -- common/autotest_common.sh@960 -- # wait 231233 00:12:50.322 03:26:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:50.322 03:26:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:50.322 03:26:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:50.322 03:26:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.322 03:26:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.322 03:26:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.322 03:26:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.322 03:26:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.228 03:26:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:52.228 00:12:52.228 real 0m27.866s 00:12:52.228 user 0m40.559s 00:12:52.228 sys 0m8.583s 00:12:52.228 03:26:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:52.228 03:26:29 -- common/autotest_common.sh@10 -- # set +x 00:12:52.228 ************************************ 00:12:52.228 END TEST nvmf_zcopy 00:12:52.228 ************************************ 00:12:52.228 03:26:29 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:52.228 03:26:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:52.228 03:26:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.228 03:26:29 -- common/autotest_common.sh@10 -- # set +x 00:12:52.228 ************************************ 
00:12:52.228 START TEST nvmf_nmic 00:12:52.228 ************************************ 00:12:52.228 03:26:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:52.486 * Looking for test storage... 00:12:52.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.486 03:26:29 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.486 03:26:29 -- nvmf/common.sh@7 -- # uname -s 00:12:52.486 03:26:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.486 03:26:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.486 03:26:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.486 03:26:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.486 03:26:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.486 03:26:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.486 03:26:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.486 03:26:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.486 03:26:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.486 03:26:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.486 03:26:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.486 03:26:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.486 03:26:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.486 03:26:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.486 03:26:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.486 03:26:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.486 03:26:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.486 03:26:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.486 03:26:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.486 03:26:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.486 03:26:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.486 03:26:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.486 03:26:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.486 03:26:29 -- paths/export.sh@5 -- # export PATH 00:12:52.486 03:26:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.486 03:26:29 -- nvmf/common.sh@47 -- # : 0 00:12:52.486 03:26:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.486 03:26:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.486 03:26:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.486 03:26:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.486 03:26:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.486 03:26:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.486 03:26:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.486 03:26:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.486 03:26:29 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:52.486 03:26:29 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:52.486 03:26:29 -- target/nmic.sh@14 -- # nvmftestinit 00:12:52.486 03:26:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:52.486 03:26:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.486 03:26:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:52.486 03:26:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:52.486 03:26:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:52.486 03:26:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.486 03:26:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.486 03:26:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.486 03:26:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:52.486 03:26:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:52.486 03:26:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:52.486 03:26:29 -- common/autotest_common.sh@10 -- # set +x 00:12:54.391 03:26:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:54.391 03:26:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:54.391 03:26:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:54.391 03:26:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:54.391 03:26:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:54.391 03:26:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:54.391 03:26:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:54.391 03:26:31 -- nvmf/common.sh@295 -- # net_devs=() 00:12:54.391 03:26:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:54.391 03:26:31 -- nvmf/common.sh@296 -- # 
e810=() 00:12:54.391 03:26:31 -- nvmf/common.sh@296 -- # local -ga e810 00:12:54.391 03:26:31 -- nvmf/common.sh@297 -- # x722=() 00:12:54.391 03:26:31 -- nvmf/common.sh@297 -- # local -ga x722 00:12:54.391 03:26:31 -- nvmf/common.sh@298 -- # mlx=() 00:12:54.391 03:26:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:54.391 03:26:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.391 03:26:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.391 03:26:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.391 03:26:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.391 03:26:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.391 03:26:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.391 03:26:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.391 03:26:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.391 03:26:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.391 03:26:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.391 03:26:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.391 03:26:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:54.391 03:26:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:54.391 03:26:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:54.391 03:26:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.391 03:26:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:54.391 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:54.391 03:26:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.391 03:26:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:54.391 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:54.391 03:26:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:54.391 03:26:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:54.391 03:26:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.391 03:26:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.391 03:26:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:54.391 03:26:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.391 03:26:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:54.391 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:12:54.391 03:26:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.391 03:26:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.391 03:26:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.391 03:26:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:54.391 03:26:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.391 03:26:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:54.391 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:54.392 03:26:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.392 03:26:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:54.392 03:26:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:54.392 03:26:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:54.392 03:26:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:54.392 03:26:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:54.392 03:26:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.392 03:26:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.392 03:26:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.392 03:26:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:54.392 03:26:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.392 03:26:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.392 03:26:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:54.392 03:26:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.392 03:26:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.392 03:26:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:54.392 03:26:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:54.392 03:26:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.392 03:26:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.392 03:26:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.392 03:26:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.392 03:26:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:54.392 03:26:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.392 03:26:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.392 03:26:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.392 03:26:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:54.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:12:54.392 00:12:54.392 --- 10.0.0.2 ping statistics --- 00:12:54.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.392 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:12:54.392 03:26:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:12:54.392 00:12:54.392 --- 10.0.0.1 ping statistics --- 00:12:54.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.392 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:12:54.392 03:26:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.392 03:26:31 -- nvmf/common.sh@411 -- # return 0 00:12:54.392 03:26:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:54.392 03:26:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.392 03:26:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:54.392 03:26:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:54.392 03:26:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.392 03:26:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:54.392 03:26:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:54.392 03:26:31 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:54.392 03:26:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:54.392 03:26:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:54.392 03:26:31 -- common/autotest_common.sh@10 -- # set +x 00:12:54.392 03:26:31 -- nvmf/common.sh@470 -- # nvmfpid=235848 00:12:54.392 03:26:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.392 03:26:31 -- nvmf/common.sh@471 -- # waitforlisten 235848 00:12:54.392 03:26:31 -- common/autotest_common.sh@817 -- # '[' -z 235848 ']' 00:12:54.392 03:26:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.392 03:26:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:54.392 03:26:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.392 03:26:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:54.392 03:26:31 -- common/autotest_common.sh@10 -- # set +x 00:12:54.650 [2024-04-19 03:26:31.975029] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:12:54.650 [2024-04-19 03:26:31.975100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.650 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.650 [2024-04-19 03:26:32.038545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.650 [2024-04-19 03:26:32.145886] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.650 [2024-04-19 03:26:32.145943] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.650 [2024-04-19 03:26:32.145972] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.650 [2024-04-19 03:26:32.145983] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.650 [2024-04-19 03:26:32.145993] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:54.650 [2024-04-19 03:26:32.146042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.650 [2024-04-19 03:26:32.146441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.650 [2024-04-19 03:26:32.146465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.650 [2024-04-19 03:26:32.146469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.910 03:26:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:54.910 03:26:32 -- common/autotest_common.sh@850 -- # return 0 00:12:54.910 03:26:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:54.910 03:26:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:54.910 03:26:32 -- common/autotest_common.sh@10 -- # set +x 00:12:54.910 03:26:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.910 03:26:32 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.910 03:26:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.910 03:26:32 -- common/autotest_common.sh@10 -- # set +x 00:12:54.910 [2024-04-19 03:26:32.308197] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.910 03:26:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.910 03:26:32 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:54.910 03:26:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.910 03:26:32 -- common/autotest_common.sh@10 -- # set +x 00:12:54.910 Malloc0 00:12:54.910 03:26:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.910 03:26:32 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:54.910 03:26:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.910 03:26:32 -- common/autotest_common.sh@10 -- # set +x 00:12:54.910 03:26:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.910 03:26:32 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:54.910 03:26:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.910 03:26:32 -- common/autotest_common.sh@10 -- # set +x 00:12:54.910 03:26:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.910 03:26:32 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.910 03:26:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.910 03:26:32 -- common/autotest_common.sh@10 -- # set +x 00:12:54.910 [2024-04-19 03:26:32.361562] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.910 03:26:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.910 03:26:32 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:54.910 test case1: single bdev can't be used in multiple subsystems 00:12:54.910 03:26:32 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:54.910 03:26:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.910 03:26:32 -- common/autotest_common.sh@10 -- # set +x 00:12:54.910 03:26:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.910 03:26:32 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:54.910 03:26:32 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:12:54.910 03:26:32 -- common/autotest_common.sh@10 -- # set +x
00:12:54.910 03:26:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:54.910 03:26:32 -- target/nmic.sh@28 -- # nmic_status=0
00:12:54.910 03:26:32 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:12:54.910 03:26:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:54.910 03:26:32 -- common/autotest_common.sh@10 -- # set +x
00:12:54.910 [2024-04-19 03:26:32.385392] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:12:54.910 [2024-04-19 03:26:32.385421] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:12:54.910 [2024-04-19 03:26:32.385436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.910 request:
00:12:54.910 {
00:12:54.910 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:12:54.910 "namespace": {
00:12:54.910 "bdev_name": "Malloc0",
00:12:54.910 "no_auto_visible": false
00:12:54.910 },
00:12:54.910 "method": "nvmf_subsystem_add_ns",
00:12:54.910 "req_id": 1
00:12:54.910 }
00:12:54.910 Got JSON-RPC error response
00:12:54.910 response:
00:12:54.910 {
00:12:54.910 "code": -32602,
00:12:54.910 "message": "Invalid parameters"
00:12:54.910 }
00:12:54.910 03:26:32 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:12:54.910 03:26:32 -- target/nmic.sh@29 -- # nmic_status=1
00:12:54.910 03:26:32 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:12:54.910 03:26:32 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:12:54.910 Adding namespace failed - expected result.
00:12:54.910 03:26:32 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:12:54.910 test case2: host connect to nvmf target in multiple paths
00:12:54.910 03:26:32 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:12:54.910 03:26:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:54.910 03:26:32 -- common/autotest_common.sh@10 -- # set +x
00:12:54.910 [2024-04-19 03:26:32.393507] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:12:54.910 03:26:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:54.910 03:26:32 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:55.480 03:26:32 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:12:56.416 03:26:33 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:12:56.416 03:26:33 -- common/autotest_common.sh@1184 -- # local i=0
00:12:56.416 03:26:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:12:56.416 03:26:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]]
00:12:56.416 03:26:33 -- common/autotest_common.sh@1191 -- # sleep 2
00:12:58.314 03:26:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:12:58.314 03:26:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:12:58.314 03:26:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:12:58.314 03:26:35 --
common/autotest_common.sh@1193 -- # nvme_devices=1
00:12:58.314 03:26:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:12:58.314 03:26:35 -- common/autotest_common.sh@1194 -- # return 0
00:12:58.314 03:26:35 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:12:58.314 [global]
00:12:58.314 thread=1
00:12:58.314 invalidate=1
00:12:58.314 rw=write
00:12:58.314 time_based=1
00:12:58.314 runtime=1
00:12:58.314 ioengine=libaio
00:12:58.314 direct=1
00:12:58.314 bs=4096
00:12:58.314 iodepth=1
00:12:58.314 norandommap=0
00:12:58.314 numjobs=1
00:12:58.314
00:12:58.314 verify_dump=1
00:12:58.314 verify_backlog=512
00:12:58.314 verify_state_save=0
00:12:58.314 do_verify=1
00:12:58.314 verify=crc32c-intel
00:12:58.314 [job0]
00:12:58.314 filename=/dev/nvme0n1
00:12:58.314 Could not set queue depth (nvme0n1)
00:12:58.314 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:12:58.314 fio-3.35
00:12:58.314 Starting 1 thread
00:12:59.687
00:12:59.687 job0: (groupid=0, jobs=1): err= 0: pid=236475: Fri Apr 19 03:26:37 2024
00:12:59.687 read: IOPS=1185, BW=4741KiB/s (4855kB/s)(4784KiB/1009msec)
00:12:59.687 slat (nsec): min=5450, max=54674, avg=14553.12, stdev=4996.17
00:12:59.687 clat (usec): min=271, max=41433, avg=444.08, stdev=2043.66
00:12:59.687 lat (usec): min=280, max=41465, avg=458.63, stdev=2044.42
00:12:59.687 clat percentiles (usec):
00:12:59.687 | 1.00th=[ 281], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 306],
00:12:59.687 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 351],
00:12:59.687 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 383], 95.00th=[ 396],
00:12:59.687 | 99.00th=[ 553], 99.50th=[ 594], 99.90th=[41681], 99.95th=[41681],
00:12:59.687 | 99.99th=[41681]
00:12:59.687 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets
00:12:59.687 slat (usec): min=8, max=31606, avg=39.86, stdev=806.02
00:12:59.687 clat (usec): min=189, max=512, avg=250.30, stdev=55.24
00:12:59.687 lat (usec): min=200, max=31995, avg=290.17, stdev=811.85
00:12:59.687 clat percentiles (usec):
00:12:59.687 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212],
00:12:59.687 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 237],
00:12:59.687 | 70.00th=[ 251], 80.00th=[ 281], 90.00th=[ 338], 95.00th=[ 383],
00:12:59.687 | 99.00th=[ 424], 99.50th=[ 433], 99.90th=[ 465], 99.95th=[ 515],
00:12:59.687 | 99.99th=[ 515]
00:12:59.687 bw ( KiB/s): min= 4320, max= 7968, per=100.00%, avg=6144.00, stdev=2579.53, samples=2
00:12:59.687 iops : min= 1080, max= 1992, avg=1536.00, stdev=644.88, samples=2
00:12:59.687 lat (usec) : 250=39.09%, 500=60.32%, 750=0.44%
00:12:59.687 lat (msec) : 2=0.04%, 50=0.11%
00:12:59.687 cpu : usr=4.56%, sys=5.36%, ctx=2734, majf=0, minf=2
00:12:59.687 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:59.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:59.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:59.687 issued rwts: total=1196,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:59.687 latency : target=0, window=0, percentile=100.00%, depth=1
00:12:59.687
00:12:59.687 Run status group 0 (all jobs):
00:12:59.687 READ: bw=4741KiB/s (4855kB/s), 4741KiB/s-4741KiB/s (4855kB/s-4855kB/s), io=4784KiB (4899kB), run=1009-1009msec
00:12:59.687 WRITE: bw=6089KiB/s (6235kB/s), 6089KiB/s-6089KiB/s
(6235kB/s-6235kB/s), io=6144KiB (6291kB), run=1009-1009msec 00:12:59.687 00:12:59.687 Disk stats (read/write): 00:12:59.687 nvme0n1: ios=1216/1536, merge=0/0, ticks=1337/356, in_queue=1693, util=98.70% 00:12:59.687 03:26:37 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:59.687 03:26:37 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.687 03:26:37 -- common/autotest_common.sh@1205 -- # local i=0 00:12:59.687 03:26:37 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:59.687 03:26:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.687 03:26:37 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:59.687 03:26:37 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.687 03:26:37 -- common/autotest_common.sh@1217 -- # return 0 00:12:59.687 03:26:37 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:59.687 03:26:37 -- target/nmic.sh@53 -- # nvmftestfini 00:12:59.687 03:26:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:59.687 03:26:37 -- nvmf/common.sh@117 -- # sync 00:12:59.687 03:26:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:59.687 03:26:37 -- nvmf/common.sh@120 -- # set +e 00:12:59.688 03:26:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.688 03:26:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:59.688 rmmod nvme_tcp 00:12:59.688 rmmod nvme_fabrics 00:12:59.688 rmmod nvme_keyring 00:12:59.688 03:26:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.688 03:26:37 -- nvmf/common.sh@124 -- # set -e 00:12:59.688 03:26:37 -- nvmf/common.sh@125 -- # return 0 00:12:59.688 03:26:37 -- nvmf/common.sh@478 -- # '[' -n 235848 ']' 00:12:59.688 03:26:37 -- nvmf/common.sh@479 -- # killprocess 235848 00:12:59.688 03:26:37 -- common/autotest_common.sh@936 -- # '[' -z 235848 ']' 00:12:59.688 03:26:37 -- common/autotest_common.sh@940 -- # kill -0 235848 00:12:59.688 03:26:37 -- common/autotest_common.sh@941 -- # uname 00:12:59.688 03:26:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:59.688 03:26:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 235848 00:12:59.946 03:26:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:59.946 03:26:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:59.946 03:26:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 235848' 00:12:59.946 killing process with pid 235848 00:12:59.946 03:26:37 -- common/autotest_common.sh@955 -- # kill 235848 00:12:59.946 03:26:37 -- common/autotest_common.sh@960 -- # wait 235848 00:13:00.205 03:26:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:00.205 03:26:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:00.205 03:26:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:00.205 03:26:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.205 03:26:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:00.205 03:26:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.205 03:26:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.205 03:26:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.110 03:26:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:02.110 00:13:02.110 real 0m9.853s 00:13:02.110 user 0m22.304s 00:13:02.110 sys 0m2.316s 00:13:02.110 03:26:39 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:13:02.110 03:26:39 -- common/autotest_common.sh@10 -- # set +x 00:13:02.110 ************************************ 00:13:02.110 END TEST nvmf_nmic 00:13:02.110 ************************************ 00:13:02.110 03:26:39 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:02.110 03:26:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:02.110 03:26:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.110 03:26:39 -- common/autotest_common.sh@10 -- # set +x 00:13:02.369 ************************************ 00:13:02.369 START TEST nvmf_fio_target 00:13:02.369 ************************************ 00:13:02.369 03:26:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:02.369 * Looking for test storage... 00:13:02.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.369 03:26:39 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.369 03:26:39 -- nvmf/common.sh@7 -- # uname -s 00:13:02.369 03:26:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.369 03:26:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.369 03:26:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.369 03:26:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.369 03:26:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.369 03:26:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.369 03:26:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.369 03:26:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.369 03:26:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.369 03:26:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.369 03:26:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:02.369 03:26:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:02.369 03:26:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.369 03:26:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.369 03:26:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.369 03:26:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.369 03:26:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.369 03:26:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.369 03:26:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.369 03:26:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.370 03:26:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.370 03:26:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.370 03:26:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.370 03:26:39 -- paths/export.sh@5 -- # export PATH 00:13:02.370 03:26:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.370 03:26:39 -- nvmf/common.sh@47 -- # : 0 00:13:02.370 03:26:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.370 03:26:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.370 03:26:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.370 03:26:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.370 03:26:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.370 03:26:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.370 03:26:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.370 03:26:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.370 03:26:39 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:02.370 03:26:39 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:02.370 03:26:39 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.370 03:26:39 -- target/fio.sh@16 -- # nvmftestinit 00:13:02.370 03:26:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:02.370 03:26:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.370 03:26:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:02.370 03:26:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:02.370 03:26:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:02.370 03:26:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.370 03:26:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.370 03:26:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.370 03:26:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:02.370 03:26:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:02.370 03:26:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:02.370 03:26:39 -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.327 03:26:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:04.327 03:26:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:04.327 03:26:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:04.327 03:26:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:04.327 03:26:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:04.327 03:26:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:04.327 03:26:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:04.327 03:26:41 -- nvmf/common.sh@295 -- # net_devs=() 00:13:04.327 03:26:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:04.327 03:26:41 -- nvmf/common.sh@296 -- # e810=() 00:13:04.327 03:26:41 -- nvmf/common.sh@296 -- # local -ga e810 00:13:04.327 03:26:41 -- nvmf/common.sh@297 -- # x722=() 00:13:04.327 03:26:41 -- nvmf/common.sh@297 -- # local -ga x722 00:13:04.327 03:26:41 -- nvmf/common.sh@298 -- # mlx=() 00:13:04.327 03:26:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:04.327 03:26:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.327 03:26:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.327 03:26:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.327 03:26:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.327 03:26:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.327 03:26:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.327 03:26:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.327 03:26:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.327 03:26:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.328 03:26:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.328 03:26:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.328 03:26:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:04.328 03:26:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:04.328 03:26:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:04.328 03:26:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.328 03:26:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:04.328 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:04.328 03:26:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.328 03:26:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:04.328 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:04.328 03:26:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:13:04.328 03:26:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:04.328 03:26:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.328 03:26:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.328 03:26:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:04.328 03:26:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.328 03:26:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:04.328 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:04.328 03:26:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.328 03:26:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.328 03:26:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.328 03:26:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:04.328 03:26:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.328 03:26:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:04.328 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:04.328 03:26:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.328 03:26:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:04.328 03:26:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:04.328 03:26:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:04.328 03:26:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.328 03:26:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.328 03:26:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.328 03:26:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:04.328 03:26:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.328 03:26:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.328 03:26:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:04.328 03:26:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.328 03:26:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.328 03:26:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:04.328 03:26:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:04.328 03:26:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.328 03:26:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.328 03:26:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.328 03:26:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.328 03:26:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:04.328 03:26:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.328 03:26:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.328 03:26:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.328 03:26:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:04.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:04.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:13:04.328 00:13:04.328 --- 10.0.0.2 ping statistics --- 00:13:04.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.328 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:13:04.328 03:26:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:04.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:13:04.328 00:13:04.328 --- 10.0.0.1 ping statistics --- 00:13:04.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.328 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:13:04.328 03:26:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.328 03:26:41 -- nvmf/common.sh@411 -- # return 0 00:13:04.328 03:26:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:04.328 03:26:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.328 03:26:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:04.328 03:26:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.328 03:26:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:04.328 03:26:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:04.328 03:26:41 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:04.328 03:26:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:04.328 03:26:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:04.328 03:26:41 -- common/autotest_common.sh@10 -- # set +x 00:13:04.587 03:26:41 -- nvmf/common.sh@470 -- # nvmfpid=238560 00:13:04.587 03:26:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:04.587 03:26:41 -- nvmf/common.sh@471 -- # waitforlisten 238560 00:13:04.587 03:26:41 -- common/autotest_common.sh@817 -- # '[' -z 238560 ']' 00:13:04.587 03:26:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.587 03:26:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:04.587 03:26:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.587 03:26:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:04.587 03:26:41 -- common/autotest_common.sh@10 -- # set +x 00:13:04.587 [2024-04-19 03:26:41.932945] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:13:04.587 [2024-04-19 03:26:41.933024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.587 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.587 [2024-04-19 03:26:42.001307] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.587 [2024-04-19 03:26:42.111968] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.587 [2024-04-19 03:26:42.112023] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:04.587 [2024-04-19 03:26:42.112036] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.587 [2024-04-19 03:26:42.112047] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.587 [2024-04-19 03:26:42.112057] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.587 [2024-04-19 03:26:42.112149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.587 [2024-04-19 03:26:42.113402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.587 [2024-04-19 03:26:42.113468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.587 [2024-04-19 03:26:42.113472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.845 03:26:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:04.845 03:26:42 -- common/autotest_common.sh@850 -- # return 0 00:13:04.845 03:26:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:04.845 03:26:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:04.845 03:26:42 -- common/autotest_common.sh@10 -- # set +x 00:13:04.845 03:26:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.845 03:26:42 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:05.103 [2024-04-19 03:26:42.482647] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.103 03:26:42 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:05.361 03:26:42 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:05.361 03:26:42 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:05.620 03:26:43 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:05.620 03:26:43 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:05.878 03:26:43 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:05.878 03:26:43 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:06.136 03:26:43 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:06.136 03:26:43 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:06.394 03:26:43 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:06.653 03:26:44 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:06.653 03:26:44 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:06.911 03:26:44 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:06.911 03:26:44 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:07.169 03:26:44 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:07.169 03:26:44 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:07.427 03:26:44 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:07.684 03:26:45 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:07.684 03:26:45 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:07.941 03:26:45 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:07.941 03:26:45 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.199 03:26:45 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.457 [2024-04-19 03:26:45.767549] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.457 03:26:45 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:08.713 03:26:46 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:08.713 03:26:46 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.276 03:26:46 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:09.276 03:26:46 -- common/autotest_common.sh@1184 -- # local i=0 00:13:09.276 03:26:46 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.276 03:26:46 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:13:09.276 03:26:46 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:13:09.276 03:26:46 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:11.801 03:26:48 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:11.801 03:26:48 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:11.801 03:26:48 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.801 03:26:48 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:13:11.801 03:26:48 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.801 03:26:48 -- common/autotest_common.sh@1194 -- # return 0 00:13:11.801 03:26:48 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:11.801 [global] 00:13:11.801 thread=1 00:13:11.801 invalidate=1 00:13:11.801 rw=write 00:13:11.801 time_based=1 00:13:11.801 runtime=1 00:13:11.801 ioengine=libaio 00:13:11.801 direct=1 00:13:11.801 bs=4096 00:13:11.801 iodepth=1 00:13:11.801 norandommap=0 00:13:11.801 numjobs=1 00:13:11.801 00:13:11.801 verify_dump=1 00:13:11.801 verify_backlog=512 00:13:11.801 verify_state_save=0 00:13:11.801 do_verify=1 00:13:11.801 verify=crc32c-intel 00:13:11.801 [job0] 00:13:11.801 filename=/dev/nvme0n1 00:13:11.801 [job1] 00:13:11.801 filename=/dev/nvme0n2 00:13:11.801 [job2] 00:13:11.801 filename=/dev/nvme0n3 00:13:11.801 [job3] 00:13:11.801 filename=/dev/nvme0n4 00:13:11.801 Could not set queue depth (nvme0n1) 00:13:11.801 Could not set queue depth (nvme0n2) 00:13:11.801 Could not set queue depth (nvme0n3) 00:13:11.801 Could not set queue depth (nvme0n4) 00:13:11.801 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:13:11.801 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:11.801 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:11.801 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:11.801 fio-3.35 00:13:11.801 Starting 4 threads 00:13:13.171 00:13:13.171 job0: (groupid=0, jobs=1): err= 0: pid=239627: Fri Apr 19 03:26:50 2024 00:13:13.171 read: IOPS=1095, BW=4383KiB/s (4488kB/s)(4488KiB/1024msec) 00:13:13.171 slat (nsec): min=5569, max=49482, avg=8078.05, stdev=3865.66 00:13:13.171 clat (usec): min=322, max=42019, avg=584.94, stdev=3000.26 00:13:13.171 lat (usec): min=330, max=42037, avg=593.02, stdev=3001.08 00:13:13.171 clat percentiles (usec): 00:13:13.171 | 1.00th=[ 334], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 347], 00:13:13.171 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 363], 00:13:13.171 | 70.00th=[ 371], 80.00th=[ 379], 90.00th=[ 396], 95.00th=[ 408], 00:13:13.171 | 99.00th=[ 494], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:13.171 | 99.99th=[42206] 00:13:13.171 write: IOPS=1500, BW=6000KiB/s (6144kB/s)(6144KiB/1024msec); 0 zone resets 00:13:13.171 slat (usec): min=7, max=1122, avg=13.98, stdev=29.34 00:13:13.171 clat (usec): min=171, max=592, avg=213.90, stdev=34.12 00:13:13.171 lat (usec): min=180, max=1462, avg=227.87, stdev=49.39 00:13:13.171 clat percentiles (usec): 00:13:13.171 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:13:13.171 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:13:13.171 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 247], 95.00th=[ 265], 00:13:13.171 | 99.00th=[ 355], 99.50th=[ 400], 99.90th=[ 537], 99.95th=[ 594], 00:13:13.171 | 99.99th=[ 594] 00:13:13.171 bw ( KiB/s): min= 4096, max= 8192, per=38.96%, avg=6144.00, stdev=2896.31, samples=2 00:13:13.171 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:13:13.171 lat (usec) : 250=52.78%, 500=46.73%, 750=0.23%, 1000=0.04% 00:13:13.171 lat (msec) : 50=0.23% 00:13:13.171 cpu : usr=2.64%, sys=3.13%, ctx=2661, majf=0, minf=1 00:13:13.171 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.171 issued rwts: total=1122,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.171 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.171 job1: (groupid=0, jobs=1): err= 0: pid=239628: Fri Apr 19 03:26:50 2024 00:13:13.171 read: IOPS=21, BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec) 00:13:13.171 slat (nsec): min=8443, max=29561, avg=15849.27, stdev=3796.30 00:13:13.171 clat (usec): min=40551, max=42024, avg=41186.72, stdev=449.39 00:13:13.171 lat (usec): min=40560, max=42043, avg=41202.57, stdev=452.12 00:13:13.171 clat percentiles (usec): 00:13:13.171 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:13.171 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:13.171 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:13.171 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:13.171 | 99.99th=[42206] 00:13:13.171 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:13:13.171 slat (nsec): min=7726, max=54295, 
avg=13738.44, stdev=6364.14 00:13:13.171 clat (usec): min=189, max=341, avg=219.27, stdev=19.16 00:13:13.171 lat (usec): min=198, max=372, avg=233.01, stdev=20.44 00:13:13.172 clat percentiles (usec): 00:13:13.172 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 206], 00:13:13.172 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:13:13.172 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 239], 95.00th=[ 249], 00:13:13.172 | 99.00th=[ 302], 99.50th=[ 334], 99.90th=[ 343], 99.95th=[ 343], 00:13:13.172 | 99.99th=[ 343] 00:13:13.172 bw ( KiB/s): min= 4096, max= 4096, per=25.98%, avg=4096.00, stdev= 0.00, samples=1 00:13:13.172 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:13.172 lat (usec) : 250=91.20%, 500=4.68% 00:13:13.172 lat (msec) : 50=4.12% 00:13:13.172 cpu : usr=0.39%, sys=0.58%, ctx=535, majf=0, minf=1 00:13:13.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.172 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.172 job2: (groupid=0, jobs=1): err= 0: pid=239629: Fri Apr 19 03:26:50 2024 00:13:13.172 read: IOPS=1433, BW=5734KiB/s (5872kB/s)(5740KiB/1001msec) 00:13:13.172 slat (nsec): min=5756, max=61443, avg=11448.38, stdev=7477.38 00:13:13.172 clat (usec): min=300, max=1228, avg=404.75, stdev=66.00 00:13:13.172 lat (usec): min=307, max=1245, avg=416.20, stdev=69.05 00:13:13.172 clat percentiles (usec): 00:13:13.172 | 1.00th=[ 322], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 359], 00:13:13.172 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 392], 00:13:13.172 | 70.00th=[ 408], 80.00th=[ 461], 90.00th=[ 502], 95.00th=[ 523], 00:13:13.172 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 1156], 99.95th=[ 1221], 00:13:13.172 | 99.99th=[ 1221] 00:13:13.172 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:13.172 slat (nsec): min=7554, max=59617, avg=14816.70, stdev=7536.78 00:13:13.172 clat (usec): min=191, max=1789, avg=240.22, stdev=69.73 00:13:13.172 lat (usec): min=200, max=1827, avg=255.03, stdev=72.16 00:13:13.172 clat percentiles (usec): 00:13:13.172 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:13:13.172 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 235], 00:13:13.172 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 310], 00:13:13.172 | 99.00th=[ 400], 99.50th=[ 453], 99.90th=[ 1762], 99.95th=[ 1795], 00:13:13.172 | 99.99th=[ 1795] 00:13:13.172 bw ( KiB/s): min= 8192, max= 8192, per=51.95%, avg=8192.00, stdev= 0.00, samples=1 00:13:13.172 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:13.172 lat (usec) : 250=39.75%, 500=55.00%, 750=5.05%, 1000=0.07% 00:13:13.172 lat (msec) : 2=0.13% 00:13:13.172 cpu : usr=3.50%, sys=4.00%, ctx=2972, majf=0, minf=2 00:13:13.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.172 issued rwts: total=1435,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.172 job3: (groupid=0, jobs=1): err= 0: pid=239630: Fri Apr 19 03:26:50 2024 
00:13:13.172 read: IOPS=20, BW=80.8KiB/s (82.8kB/s)(84.0KiB/1039msec) 00:13:13.172 slat (nsec): min=13255, max=36166, avg=20252.71, stdev=7690.63 00:13:13.172 clat (usec): min=40785, max=42249, avg=41233.60, stdev=464.42 00:13:13.172 lat (usec): min=40804, max=42264, avg=41253.86, stdev=464.12 00:13:13.172 clat percentiles (usec): 00:13:13.172 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:13.172 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:13.172 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:13:13.172 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:13.172 | 99.99th=[42206] 00:13:13.172 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:13:13.172 slat (nsec): min=9340, max=80614, avg=17967.44, stdev=9460.12 00:13:13.172 clat (usec): min=199, max=1344, avg=313.78, stdev=99.68 00:13:13.172 lat (usec): min=214, max=1354, avg=331.75, stdev=101.66 00:13:13.172 clat percentiles (usec): 00:13:13.172 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 243], 00:13:13.172 | 30.00th=[ 253], 40.00th=[ 273], 50.00th=[ 302], 60.00th=[ 322], 00:13:13.172 | 70.00th=[ 338], 80.00th=[ 367], 90.00th=[ 408], 95.00th=[ 437], 00:13:13.172 | 99.00th=[ 578], 99.50th=[ 971], 99.90th=[ 1352], 99.95th=[ 1352], 00:13:13.172 | 99.99th=[ 1352] 00:13:13.172 bw ( KiB/s): min= 4096, max= 4096, per=25.98%, avg=4096.00, stdev= 0.00, samples=1 00:13:13.172 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:13.172 lat (usec) : 250=26.45%, 500=67.92%, 750=1.13%, 1000=0.19% 00:13:13.172 lat (msec) : 2=0.38%, 50=3.94% 00:13:13.172 cpu : usr=0.39%, sys=1.35%, ctx=534, majf=0, minf=1 00:13:13.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.172 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.172 00:13:13.172 Run status group 0 (all jobs): 00:13:13.172 READ: bw=9.77MiB/s (10.2MB/s), 80.8KiB/s-5734KiB/s (82.8kB/s-5872kB/s), io=10.2MiB (10.6MB), run=1001-1039msec 00:13:13.172 WRITE: bw=15.4MiB/s (16.1MB/s), 1971KiB/s-6138KiB/s (2018kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1039msec 00:13:13.172 00:13:13.172 Disk stats (read/write): 00:13:13.172 nvme0n1: ios=1169/1536, merge=0/0, ticks=540/311, in_queue=851, util=85.77% 00:13:13.172 nvme0n2: ios=73/512, merge=0/0, ticks=828/109, in_queue=937, util=91.35% 00:13:13.172 nvme0n3: ios=1154/1536, merge=0/0, ticks=1318/349, in_queue=1667, util=93.64% 00:13:13.172 nvme0n4: ios=70/512, merge=0/0, ticks=750/156, in_queue=906, util=96.00% 00:13:13.172 03:26:50 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:13.172 [global] 00:13:13.172 thread=1 00:13:13.172 invalidate=1 00:13:13.172 rw=randwrite 00:13:13.172 time_based=1 00:13:13.172 runtime=1 00:13:13.172 ioengine=libaio 00:13:13.172 direct=1 00:13:13.172 bs=4096 00:13:13.172 iodepth=1 00:13:13.172 norandommap=0 00:13:13.172 numjobs=1 00:13:13.172 00:13:13.172 verify_dump=1 00:13:13.172 verify_backlog=512 00:13:13.172 verify_state_save=0 00:13:13.172 do_verify=1 00:13:13.172 verify=crc32c-intel 00:13:13.172 [job0] 00:13:13.172 filename=/dev/nvme0n1 00:13:13.172 [job1] 00:13:13.172 
filename=/dev/nvme0n2 00:13:13.172 [job2] 00:13:13.172 filename=/dev/nvme0n3 00:13:13.172 [job3] 00:13:13.172 filename=/dev/nvme0n4 00:13:13.172 Could not set queue depth (nvme0n1) 00:13:13.172 Could not set queue depth (nvme0n2) 00:13:13.172 Could not set queue depth (nvme0n3) 00:13:13.172 Could not set queue depth (nvme0n4) 00:13:13.172 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:13.172 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:13.172 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:13.172 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:13.172 fio-3.35 00:13:13.172 Starting 4 threads 00:13:14.544 00:13:14.544 job0: (groupid=0, jobs=1): err= 0: pid=239854: Fri Apr 19 03:26:51 2024 00:13:14.544 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:14.544 slat (nsec): min=6218, max=75360, avg=14666.10, stdev=9554.10 00:13:14.544 clat (usec): min=290, max=791, avg=342.65, stdev=40.03 00:13:14.544 lat (usec): min=300, max=832, avg=357.32, stdev=45.37 00:13:14.544 clat percentiles (usec): 00:13:14.544 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:13:14.544 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 338], 00:13:14.544 | 70.00th=[ 347], 80.00th=[ 367], 90.00th=[ 392], 95.00th=[ 412], 00:13:14.544 | 99.00th=[ 482], 99.50th=[ 545], 99.90th=[ 676], 99.95th=[ 791], 00:13:14.544 | 99.99th=[ 791] 00:13:14.544 write: IOPS=1809, BW=7237KiB/s (7410kB/s)(7244KiB/1001msec); 0 zone resets 00:13:14.544 slat (nsec): min=6067, max=63332, avg=14970.52, stdev=7320.62 00:13:14.544 clat (usec): min=182, max=467, avg=226.01, stdev=36.03 00:13:14.544 lat (usec): min=191, max=485, avg=240.98, stdev=39.21 00:13:14.544 clat percentiles (usec): 00:13:14.544 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 202], 00:13:14.544 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:13:14.544 | 70.00th=[ 231], 80.00th=[ 243], 90.00th=[ 269], 95.00th=[ 289], 00:13:14.544 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 469], 99.95th=[ 469], 00:13:14.544 | 99.99th=[ 469] 00:13:14.544 bw ( KiB/s): min= 8192, max= 8192, per=55.14%, avg=8192.00, stdev= 0.00, samples=1 00:13:14.544 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:14.544 lat (usec) : 250=46.01%, 500=53.69%, 750=0.27%, 1000=0.03% 00:13:14.544 cpu : usr=2.90%, sys=5.10%, ctx=3347, majf=0, minf=1 00:13:14.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:14.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.544 issued rwts: total=1536,1811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:14.544 job1: (groupid=0, jobs=1): err= 0: pid=239855: Fri Apr 19 03:26:51 2024 00:13:14.544 read: IOPS=51, BW=208KiB/s (213kB/s)(216KiB/1039msec) 00:13:14.544 slat (nsec): min=5846, max=35754, avg=11776.00, stdev=8480.71 00:13:14.544 clat (usec): min=371, max=42206, avg=16528.68, stdev=20325.25 00:13:14.544 lat (usec): min=377, max=42220, avg=16540.46, stdev=20331.74 00:13:14.544 clat percentiles (usec): 00:13:14.544 | 1.00th=[ 371], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 420], 00:13:14.544 | 30.00th=[ 502], 
40.00th=[ 515], 50.00th=[ 537], 60.00th=[ 545], 00:13:14.544 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:14.544 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:14.544 | 99.99th=[42206] 00:13:14.544 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:13:14.544 slat (nsec): min=7394, max=65302, avg=14309.99, stdev=7992.33 00:13:14.544 clat (usec): min=189, max=580, avg=265.69, stdev=69.57 00:13:14.544 lat (usec): min=197, max=608, avg=280.00, stdev=73.10 00:13:14.544 clat percentiles (usec): 00:13:14.544 | 1.00th=[ 198], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:13:14.544 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:13:14.544 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 355], 95.00th=[ 449], 00:13:14.544 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 578], 99.95th=[ 578], 00:13:14.544 | 99.99th=[ 578] 00:13:14.544 bw ( KiB/s): min= 4096, max= 4096, per=27.57%, avg=4096.00, stdev= 0.00, samples=1 00:13:14.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:14.544 lat (usec) : 250=52.83%, 500=38.16%, 750=5.30% 00:13:14.544 lat (msec) : 50=3.71% 00:13:14.544 cpu : usr=0.87%, sys=0.67%, ctx=567, majf=0, minf=1 00:13:14.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:14.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.544 issued rwts: total=54,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:14.544 job2: (groupid=0, jobs=1): err= 0: pid=239856: Fri Apr 19 03:26:51 2024 00:13:14.544 read: IOPS=194, BW=780KiB/s (798kB/s)(796KiB/1021msec) 00:13:14.544 slat (nsec): min=5773, max=43192, avg=8185.79, stdev=5485.82 00:13:14.544 clat (usec): min=326, max=42374, avg=4410.92, stdev=12149.73 00:13:14.544 lat (usec): min=332, max=42387, avg=4419.10, stdev=12154.11 00:13:14.544 clat percentiles (usec): 00:13:14.544 | 1.00th=[ 343], 5.00th=[ 429], 10.00th=[ 453], 20.00th=[ 457], 00:13:14.544 | 30.00th=[ 465], 40.00th=[ 469], 50.00th=[ 474], 60.00th=[ 482], 00:13:14.544 | 70.00th=[ 494], 80.00th=[ 506], 90.00th=[ 627], 95.00th=[42206], 00:13:14.544 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:14.544 | 99.99th=[42206] 00:13:14.544 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:13:14.544 slat (nsec): min=7285, max=56270, avg=15302.31, stdev=9015.21 00:13:14.544 clat (usec): min=200, max=455, avg=255.60, stdev=34.90 00:13:14.544 lat (usec): min=212, max=494, avg=270.90, stdev=37.78 00:13:14.544 clat percentiles (usec): 00:13:14.544 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 233], 00:13:14.544 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:13:14.544 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 330], 00:13:14.544 | 99.00th=[ 367], 99.50th=[ 441], 99.90th=[ 457], 99.95th=[ 457], 00:13:14.544 | 99.99th=[ 457] 00:13:14.544 bw ( KiB/s): min= 4096, max= 4096, per=27.57%, avg=4096.00, stdev= 0.00, samples=1 00:13:14.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:14.544 lat (usec) : 250=40.79%, 500=53.02%, 750=3.52% 00:13:14.544 lat (msec) : 50=2.67% 00:13:14.544 cpu : usr=0.49%, sys=1.27%, ctx=711, majf=0, minf=2 00:13:14.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:14.544 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.544 issued rwts: total=199,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:14.544 job3: (groupid=0, jobs=1): err= 0: pid=239857: Fri Apr 19 03:26:51 2024 00:13:14.544 read: IOPS=522, BW=2092KiB/s (2142kB/s)(2100KiB/1004msec) 00:13:14.544 slat (nsec): min=5828, max=50041, avg=7291.86, stdev=3947.56 00:13:14.544 clat (usec): min=278, max=42226, avg=1356.73, stdev=6458.19 00:13:14.544 lat (usec): min=285, max=42233, avg=1364.02, stdev=6460.74 00:13:14.544 clat percentiles (usec): 00:13:14.544 | 1.00th=[ 281], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 297], 00:13:14.544 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:13:14.544 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 367], 95.00th=[ 412], 00:13:14.544 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:14.544 | 99.99th=[42206] 00:13:14.544 write: IOPS=1019, BW=4080KiB/s (4178kB/s)(4096KiB/1004msec); 0 zone resets 00:13:14.544 slat (nsec): min=7273, max=79793, avg=13084.62, stdev=8123.73 00:13:14.544 clat (usec): min=183, max=556, avg=263.22, stdev=75.26 00:13:14.544 lat (usec): min=191, max=599, avg=276.30, stdev=79.61 00:13:14.544 clat percentiles (usec): 00:13:14.544 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:13:14.544 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 241], 00:13:14.544 | 70.00th=[ 265], 80.00th=[ 338], 90.00th=[ 396], 95.00th=[ 412], 00:13:14.544 | 99.00th=[ 469], 99.50th=[ 494], 99.90th=[ 537], 99.95th=[ 553], 00:13:14.544 | 99.99th=[ 553] 00:13:14.544 bw ( KiB/s): min= 8192, max= 8192, per=55.14%, avg=8192.00, stdev= 0.00, samples=1 00:13:14.544 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:14.544 lat (usec) : 250=42.87%, 500=55.97%, 750=0.26% 00:13:14.544 lat (msec) : 4=0.06%, 50=0.84% 00:13:14.544 cpu : usr=0.80%, sys=2.59%, ctx=1550, majf=0, minf=1 00:13:14.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:14.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.544 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:14.544 00:13:14.544 Run status group 0 (all jobs): 00:13:14.544 READ: bw=8909KiB/s (9122kB/s), 208KiB/s-6138KiB/s (213kB/s-6285kB/s), io=9256KiB (9478kB), run=1001-1039msec 00:13:14.544 WRITE: bw=14.5MiB/s (15.2MB/s), 1971KiB/s-7237KiB/s (2018kB/s-7410kB/s), io=15.1MiB (15.8MB), run=1001-1039msec 00:13:14.544 00:13:14.544 Disk stats (read/write): 00:13:14.544 nvme0n1: ios=1395/1536, merge=0/0, ticks=470/320, in_queue=790, util=87.17% 00:13:14.544 nvme0n2: ios=88/512, merge=0/0, ticks=1614/130, in_queue=1744, util=99.80% 00:13:14.544 nvme0n3: ios=194/512, merge=0/0, ticks=668/130, in_queue=798, util=88.91% 00:13:14.544 nvme0n4: ios=546/1024, merge=0/0, ticks=1492/245, in_queue=1737, util=98.00% 00:13:14.544 03:26:51 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:14.544 [global] 00:13:14.544 thread=1 00:13:14.544 invalidate=1 00:13:14.544 rw=write 00:13:14.544 time_based=1 00:13:14.544 runtime=1 00:13:14.544 ioengine=libaio 00:13:14.544 direct=1 00:13:14.544 
bs=4096 00:13:14.544 iodepth=128 00:13:14.544 norandommap=0 00:13:14.544 numjobs=1 00:13:14.544 00:13:14.544 verify_dump=1 00:13:14.544 verify_backlog=512 00:13:14.544 verify_state_save=0 00:13:14.544 do_verify=1 00:13:14.544 verify=crc32c-intel 00:13:14.544 [job0] 00:13:14.544 filename=/dev/nvme0n1 00:13:14.544 [job1] 00:13:14.544 filename=/dev/nvme0n2 00:13:14.544 [job2] 00:13:14.544 filename=/dev/nvme0n3 00:13:14.544 [job3] 00:13:14.544 filename=/dev/nvme0n4 00:13:14.544 Could not set queue depth (nvme0n1) 00:13:14.544 Could not set queue depth (nvme0n2) 00:13:14.544 Could not set queue depth (nvme0n3) 00:13:14.544 Could not set queue depth (nvme0n4) 00:13:14.544 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:14.544 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:14.544 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:14.544 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:14.544 fio-3.35 00:13:14.544 Starting 4 threads 00:13:15.926 00:13:15.926 job0: (groupid=0, jobs=1): err= 0: pid=240089: Fri Apr 19 03:26:53 2024 00:13:15.926 read: IOPS=3431, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1004msec) 00:13:15.926 slat (usec): min=2, max=15578, avg=119.14, stdev=786.28 00:13:15.926 clat (usec): min=615, max=44721, avg=16233.17, stdev=8335.02 00:13:15.926 lat (usec): min=651, max=52260, avg=16352.31, stdev=8366.77 00:13:15.926 clat percentiles (usec): 00:13:15.926 | 1.00th=[ 1188], 5.00th=[ 6259], 10.00th=[ 7635], 20.00th=[10159], 00:13:15.926 | 30.00th=[11994], 40.00th=[13042], 50.00th=[13960], 60.00th=[15270], 00:13:15.926 | 70.00th=[17433], 80.00th=[23200], 90.00th=[28705], 95.00th=[34341], 00:13:15.926 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:13:15.926 | 99.99th=[44827] 00:13:15.926 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:13:15.926 slat (usec): min=3, max=79675, avg=153.05, stdev=1992.99 00:13:15.926 clat (msec): min=3, max=156, avg=14.89, stdev=10.66 00:13:15.926 lat (msec): min=3, max=156, avg=15.05, stdev=10.96 00:13:15.926 clat percentiles (msec): 00:13:15.926 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 10], 00:13:15.926 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:13:15.926 | 70.00th=[ 16], 80.00th=[ 18], 90.00th=[ 21], 95.00th=[ 27], 00:13:15.926 | 99.00th=[ 54], 99.50th=[ 55], 99.90th=[ 157], 99.95th=[ 157], 00:13:15.926 | 99.99th=[ 157] 00:13:15.926 bw ( KiB/s): min=13696, max=14976, per=23.48%, avg=14336.00, stdev=905.10, samples=2 00:13:15.926 iops : min= 3424, max= 3744, avg=3584.00, stdev=226.27, samples=2 00:13:15.926 lat (usec) : 750=0.04% 00:13:15.926 lat (msec) : 2=0.57%, 4=1.64%, 10=19.79%, 20=59.94%, 50=17.24% 00:13:15.926 lat (msec) : 100=0.67%, 250=0.11% 00:13:15.926 cpu : usr=2.49%, sys=5.18%, ctx=384, majf=0, minf=1 00:13:15.926 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:15.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:15.926 issued rwts: total=3445,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:15.926 job1: (groupid=0, jobs=1): err= 0: pid=240090: Fri Apr 19 03:26:53 2024 00:13:15.926 read: IOPS=3569, 
BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:13:15.926 slat (usec): min=2, max=25178, avg=144.69, stdev=1030.46 00:13:15.926 clat (msec): min=4, max=100, avg=19.26, stdev=16.19 00:13:15.926 lat (msec): min=4, max=100, avg=19.41, stdev=16.27 00:13:15.926 clat percentiles (msec): 00:13:15.926 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 10], 20.00th=[ 12], 00:13:15.926 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:13:15.926 | 70.00th=[ 17], 80.00th=[ 23], 90.00th=[ 32], 95.00th=[ 64], 00:13:15.926 | 99.00th=[ 94], 99.50th=[ 95], 99.90th=[ 101], 99.95th=[ 101], 00:13:15.926 | 99.99th=[ 101] 00:13:15.926 write: IOPS=3744, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1004msec); 0 zone resets 00:13:15.926 slat (usec): min=3, max=45890, avg=118.04, stdev=1012.78 00:13:15.927 clat (usec): min=280, max=55905, avg=15501.51, stdev=9598.95 00:13:15.927 lat (usec): min=1845, max=77171, avg=15619.55, stdev=9658.92 00:13:15.927 clat percentiles (usec): 00:13:15.927 | 1.00th=[ 2933], 5.00th=[ 5866], 10.00th=[ 8586], 20.00th=[ 9896], 00:13:15.927 | 30.00th=[11076], 40.00th=[12256], 50.00th=[12780], 60.00th=[13173], 00:13:15.927 | 70.00th=[14484], 80.00th=[17695], 90.00th=[27395], 95.00th=[38011], 00:13:15.927 | 99.00th=[53216], 99.50th=[53216], 99.90th=[55837], 99.95th=[55837], 00:13:15.927 | 99.99th=[55837] 00:13:15.927 bw ( KiB/s): min=12664, max=16384, per=23.79%, avg=14524.00, stdev=2630.44, samples=2 00:13:15.927 iops : min= 3166, max= 4096, avg=3631.00, stdev=657.61, samples=2 00:13:15.927 lat (usec) : 500=0.01% 00:13:15.927 lat (msec) : 2=0.14%, 4=0.63%, 10=15.62%, 20=63.53%, 50=16.19% 00:13:15.927 lat (msec) : 100=3.83%, 250=0.05% 00:13:15.927 cpu : usr=3.69%, sys=4.59%, ctx=359, majf=0, minf=1 00:13:15.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:15.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:15.927 issued rwts: total=3584,3759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:15.927 job2: (groupid=0, jobs=1): err= 0: pid=240093: Fri Apr 19 03:26:53 2024 00:13:15.927 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:13:15.927 slat (usec): min=2, max=15430, avg=130.35, stdev=782.67 00:13:15.927 clat (usec): min=8763, max=41718, avg=17526.89, stdev=4999.78 00:13:15.927 lat (usec): min=8768, max=41725, avg=17657.25, stdev=5035.96 00:13:15.927 clat percentiles (usec): 00:13:15.927 | 1.00th=[ 8848], 5.00th=[11076], 10.00th=[12649], 20.00th=[13566], 00:13:15.927 | 30.00th=[14615], 40.00th=[15664], 50.00th=[16712], 60.00th=[17957], 00:13:15.927 | 70.00th=[18744], 80.00th=[21103], 90.00th=[25297], 95.00th=[26084], 00:13:15.927 | 99.00th=[33817], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:13:15.927 | 99.99th=[41681] 00:13:15.927 write: IOPS=3911, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1002msec); 0 zone resets 00:13:15.927 slat (usec): min=3, max=48557, avg=121.81, stdev=1074.75 00:13:15.927 clat (usec): min=1456, max=59781, avg=16422.28, stdev=9707.58 00:13:15.927 lat (usec): min=2191, max=59787, avg=16544.09, stdev=9735.35 00:13:15.927 clat percentiles (usec): 00:13:15.927 | 1.00th=[ 3916], 5.00th=[ 7373], 10.00th=[ 9503], 20.00th=[11338], 00:13:15.927 | 30.00th=[12518], 40.00th=[13304], 50.00th=[14353], 60.00th=[15139], 00:13:15.927 | 70.00th=[16057], 80.00th=[17433], 90.00th=[24249], 95.00th=[42206], 00:13:15.927 | 99.00th=[57410], 99.50th=[59507], 99.90th=[59507], 
99.95th=[60031], 00:13:15.927 | 99.99th=[60031] 00:13:15.927 bw ( KiB/s): min=14176, max=16160, per=24.84%, avg=15168.00, stdev=1402.90, samples=2 00:13:15.927 iops : min= 3544, max= 4040, avg=3792.00, stdev=350.72, samples=2 00:13:15.927 lat (msec) : 2=0.01%, 4=0.57%, 10=6.30%, 20=74.89%, 50=16.94% 00:13:15.927 lat (msec) : 100=1.28% 00:13:15.927 cpu : usr=3.40%, sys=5.09%, ctx=363, majf=0, minf=1 00:13:15.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:15.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:15.927 issued rwts: total=3584,3919,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:15.927 job3: (groupid=0, jobs=1): err= 0: pid=240094: Fri Apr 19 03:26:53 2024 00:13:15.927 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:13:15.927 slat (usec): min=2, max=22417, avg=127.39, stdev=854.16 00:13:15.927 clat (usec): min=4731, max=49111, avg=17327.09, stdev=6723.91 00:13:15.927 lat (usec): min=4734, max=49120, avg=17454.48, stdev=6729.57 00:13:15.927 clat percentiles (usec): 00:13:15.927 | 1.00th=[ 7635], 5.00th=[ 9634], 10.00th=[11338], 20.00th=[12256], 00:13:15.927 | 30.00th=[13829], 40.00th=[14877], 50.00th=[15795], 60.00th=[16581], 00:13:15.927 | 70.00th=[17695], 80.00th=[21365], 90.00th=[25822], 95.00th=[29492], 00:13:15.927 | 99.00th=[39584], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:13:15.927 | 99.99th=[49021] 00:13:15.927 write: IOPS=4070, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:13:15.927 slat (usec): min=3, max=17689, avg=126.53, stdev=777.13 00:13:15.927 clat (usec): min=5130, max=38614, avg=15911.79, stdev=6208.39 00:13:15.927 lat (usec): min=6845, max=41861, avg=16038.32, stdev=6233.80 00:13:15.927 clat percentiles (usec): 00:13:15.927 | 1.00th=[ 7832], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10159], 00:13:15.927 | 30.00th=[11600], 40.00th=[13173], 50.00th=[14222], 60.00th=[16188], 00:13:15.927 | 70.00th=[18482], 80.00th=[21103], 90.00th=[25297], 95.00th=[27132], 00:13:15.927 | 99.00th=[34341], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:13:15.927 | 99.99th=[38536] 00:13:15.927 bw ( KiB/s): min=15360, max=16384, per=25.99%, avg=15872.00, stdev=724.08, samples=2 00:13:15.927 iops : min= 3840, max= 4096, avg=3968.00, stdev=181.02, samples=2 00:13:15.927 lat (msec) : 10=13.22%, 20=62.18%, 50=24.60% 00:13:15.927 cpu : usr=3.28%, sys=4.58%, ctx=432, majf=0, minf=1 00:13:15.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:15.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:15.927 issued rwts: total=3584,4095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:15.927 00:13:15.927 Run status group 0 (all jobs): 00:13:15.927 READ: bw=55.1MiB/s (57.8MB/s), 13.4MiB/s-14.0MiB/s (14.1MB/s-14.7MB/s), io=55.5MiB (58.1MB), run=1002-1006msec 00:13:15.927 WRITE: bw=59.6MiB/s (62.5MB/s), 13.9MiB/s-15.9MiB/s (14.6MB/s-16.7MB/s), io=60.0MiB (62.9MB), run=1002-1006msec 00:13:15.927 00:13:15.927 Disk stats (read/write): 00:13:15.927 nvme0n1: ios=2914/3072, merge=0/0, ticks=27561/21908, in_queue=49469, util=97.80% 00:13:15.927 nvme0n2: ios=2805/3072, merge=0/0, ticks=22325/30349, in_queue=52674, util=86.19% 00:13:15.927 
nvme0n3: ios=3129/3290, merge=0/0, ticks=22074/25671, in_queue=47745, util=89.16% 00:13:15.927 nvme0n4: ios=3216/3584, merge=0/0, ticks=17986/21362, in_queue=39348, util=91.38% 00:13:15.927 03:26:53 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:15.927 [global] 00:13:15.927 thread=1 00:13:15.927 invalidate=1 00:13:15.927 rw=randwrite 00:13:15.927 time_based=1 00:13:15.927 runtime=1 00:13:15.927 ioengine=libaio 00:13:15.927 direct=1 00:13:15.927 bs=4096 00:13:15.927 iodepth=128 00:13:15.927 norandommap=0 00:13:15.927 numjobs=1 00:13:15.927 00:13:15.927 verify_dump=1 00:13:15.927 verify_backlog=512 00:13:15.927 verify_state_save=0 00:13:15.927 do_verify=1 00:13:15.927 verify=crc32c-intel 00:13:15.927 [job0] 00:13:15.927 filename=/dev/nvme0n1 00:13:15.927 [job1] 00:13:15.927 filename=/dev/nvme0n2 00:13:15.927 [job2] 00:13:15.927 filename=/dev/nvme0n3 00:13:15.927 [job3] 00:13:15.927 filename=/dev/nvme0n4 00:13:15.927 Could not set queue depth (nvme0n1) 00:13:15.927 Could not set queue depth (nvme0n2) 00:13:15.927 Could not set queue depth (nvme0n3) 00:13:15.927 Could not set queue depth (nvme0n4) 00:13:16.186 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:16.186 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:16.186 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:16.186 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:16.186 fio-3.35 00:13:16.186 Starting 4 threads 00:13:17.562 00:13:17.562 job0: (groupid=0, jobs=1): err= 0: pid=240376: Fri Apr 19 03:26:54 2024 00:13:17.562 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:13:17.562 slat (usec): min=2, max=15071, avg=144.20, stdev=845.80 00:13:17.562 clat (usec): min=2929, max=55103, avg=18677.57, stdev=11514.49 00:13:17.562 lat (usec): min=2934, max=55109, avg=18821.77, stdev=11556.99 00:13:17.562 clat percentiles (usec): 00:13:17.562 | 1.00th=[ 3032], 5.00th=[ 3392], 10.00th=[ 5997], 20.00th=[10683], 00:13:17.562 | 30.00th=[12125], 40.00th=[13435], 50.00th=[14091], 60.00th=[15139], 00:13:17.562 | 70.00th=[25297], 80.00th=[27919], 90.00th=[30802], 95.00th=[44303], 00:13:17.562 | 99.00th=[54789], 99.50th=[54789], 99.90th=[55313], 99.95th=[55313], 00:13:17.562 | 99.99th=[55313] 00:13:17.562 write: IOPS=3495, BW=13.7MiB/s (14.3MB/s)(13.8MiB/1008msec); 0 zone resets 00:13:17.562 slat (usec): min=3, max=19992, avg=154.03, stdev=994.87 00:13:17.562 clat (usec): min=1842, max=53757, avg=19681.10, stdev=8539.68 00:13:17.562 lat (usec): min=6847, max=53772, avg=19835.13, stdev=8560.04 00:13:17.562 clat percentiles (usec): 00:13:17.562 | 1.00th=[ 7832], 5.00th=[ 9634], 10.00th=[11076], 20.00th=[12911], 00:13:17.562 | 30.00th=[14877], 40.00th=[16712], 50.00th=[17957], 60.00th=[19530], 00:13:17.562 | 70.00th=[21365], 80.00th=[23725], 90.00th=[32637], 95.00th=[38011], 00:13:17.562 | 99.00th=[46924], 99.50th=[50070], 99.90th=[53740], 99.95th=[53740], 00:13:17.562 | 99.99th=[53740] 00:13:17.562 bw ( KiB/s): min=12288, max=14872, per=23.01%, avg=13580.00, stdev=1827.16, samples=2 00:13:17.562 iops : min= 3072, max= 3718, avg=3395.00, stdev=456.79, samples=2 00:13:17.562 lat (msec) : 2=0.02%, 4=3.26%, 10=7.57%, 20=52.18%, 50=35.94% 00:13:17.562 lat (msec) : 100=1.05% 00:13:17.562 cpu 
: usr=1.69%, sys=4.47%, ctx=306, majf=0, minf=13 00:13:17.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:17.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:17.562 issued rwts: total=3072,3523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.562 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:17.562 job1: (groupid=0, jobs=1): err= 0: pid=240390: Fri Apr 19 03:26:54 2024 00:13:17.562 read: IOPS=4298, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1002msec) 00:13:17.562 slat (usec): min=2, max=22070, avg=116.59, stdev=747.91 00:13:17.562 clat (usec): min=710, max=45725, avg=14840.09, stdev=7348.83 00:13:17.562 lat (usec): min=3486, max=45733, avg=14956.69, stdev=7375.65 00:13:17.562 clat percentiles (usec): 00:13:17.562 | 1.00th=[ 5735], 5.00th=[ 8029], 10.00th=[ 9110], 20.00th=[10814], 00:13:17.562 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12387], 60.00th=[12911], 00:13:17.562 | 70.00th=[13829], 80.00th=[17433], 90.00th=[28443], 95.00th=[32375], 00:13:17.562 | 99.00th=[42206], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:13:17.562 | 99.99th=[45876] 00:13:17.562 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:13:17.562 slat (usec): min=3, max=22859, avg=101.17, stdev=632.40 00:13:17.562 clat (usec): min=5729, max=42541, avg=13627.27, stdev=5611.20 00:13:17.562 lat (usec): min=6311, max=42560, avg=13728.44, stdev=5622.45 00:13:17.562 clat percentiles (usec): 00:13:17.562 | 1.00th=[ 8029], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10552], 00:13:17.562 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12125], 00:13:17.562 | 70.00th=[12780], 80.00th=[16581], 90.00th=[21103], 95.00th=[25560], 00:13:17.562 | 99.00th=[33424], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:13:17.562 | 99.99th=[42730] 00:13:17.562 bw ( KiB/s): min=16384, max=20480, per=31.24%, avg=18432.00, stdev=2896.31, samples=2 00:13:17.562 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:13:17.562 lat (usec) : 750=0.01% 00:13:17.562 lat (msec) : 4=0.36%, 10=13.17%, 20=73.51%, 50=12.96% 00:13:17.562 cpu : usr=3.10%, sys=6.59%, ctx=383, majf=0, minf=11 00:13:17.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:17.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:17.562 issued rwts: total=4307,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.562 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:17.562 job2: (groupid=0, jobs=1): err= 0: pid=240428: Fri Apr 19 03:26:54 2024 00:13:17.562 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:13:17.562 slat (usec): min=2, max=43951, avg=172.19, stdev=1403.03 00:13:17.562 clat (msec): min=6, max=127, avg=25.03, stdev=22.50 00:13:17.562 lat (msec): min=6, max=130, avg=25.20, stdev=22.60 00:13:17.562 clat percentiles (msec): 00:13:17.562 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 12], 00:13:17.562 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 19], 60.00th=[ 21], 00:13:17.562 | 70.00th=[ 24], 80.00th=[ 29], 90.00th=[ 58], 95.00th=[ 74], 00:13:17.562 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 128], 99.95th=[ 128], 00:13:17.562 | 99.99th=[ 128] 00:13:17.562 write: IOPS=2634, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1009msec); 0 zone resets 00:13:17.562 slat (usec): min=3, max=37611, 
avg=197.66, stdev=1134.73 00:13:17.562 clat (usec): min=5874, max=94686, avg=23839.52, stdev=14389.03 00:13:17.562 lat (usec): min=6341, max=94698, avg=24037.18, stdev=14505.68 00:13:17.562 clat percentiles (usec): 00:13:17.562 | 1.00th=[ 7373], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[10814], 00:13:17.562 | 30.00th=[12256], 40.00th=[19006], 50.00th=[21627], 60.00th=[26084], 00:13:17.562 | 70.00th=[28181], 80.00th=[32637], 90.00th=[39060], 95.00th=[53740], 00:13:17.562 | 99.00th=[85459], 99.50th=[90702], 99.90th=[93848], 99.95th=[94897], 00:13:17.562 | 99.99th=[94897] 00:13:17.562 bw ( KiB/s): min= 9264, max=11216, per=17.35%, avg=10240.00, stdev=1380.27, samples=2 00:13:17.562 iops : min= 2316, max= 2804, avg=2560.00, stdev=345.07, samples=2 00:13:17.562 lat (msec) : 10=9.52%, 20=39.56%, 50=41.93%, 100=7.49%, 250=1.49% 00:13:17.562 cpu : usr=1.79%, sys=3.67%, ctx=303, majf=0, minf=19 00:13:17.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:17.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:17.562 issued rwts: total=2560,2658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.562 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:17.562 job3: (groupid=0, jobs=1): err= 0: pid=240440: Fri Apr 19 03:26:54 2024 00:13:17.562 read: IOPS=4056, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:13:17.562 slat (usec): min=2, max=14460, avg=106.85, stdev=778.83 00:13:17.562 clat (usec): min=3743, max=56139, avg=15164.89, stdev=5239.83 00:13:17.562 lat (usec): min=3750, max=60808, avg=15271.74, stdev=5277.36 00:13:17.562 clat percentiles (usec): 00:13:17.562 | 1.00th=[ 6783], 5.00th=[ 8029], 10.00th=[10552], 20.00th=[12387], 00:13:17.562 | 30.00th=[12780], 40.00th=[13435], 50.00th=[14222], 60.00th=[14615], 00:13:17.562 | 70.00th=[15664], 80.00th=[17433], 90.00th=[21627], 95.00th=[23725], 00:13:17.562 | 99.00th=[30802], 99.50th=[49021], 99.90th=[56361], 99.95th=[56361], 00:13:17.562 | 99.99th=[56361] 00:13:17.562 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:13:17.562 slat (usec): min=3, max=16500, avg=102.32, stdev=683.60 00:13:17.562 clat (usec): min=796, max=55776, avg=16081.66, stdev=9101.73 00:13:17.562 lat (usec): min=966, max=59410, avg=16183.98, stdev=9135.90 00:13:17.562 clat percentiles (usec): 00:13:17.562 | 1.00th=[ 5080], 5.00th=[ 7046], 10.00th=[ 9503], 20.00th=[11994], 00:13:17.563 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:13:17.563 | 70.00th=[14353], 80.00th=[17171], 90.00th=[29230], 95.00th=[40109], 00:13:17.563 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:13:17.563 | 99.99th=[55837] 00:13:17.563 bw ( KiB/s): min=16152, max=16616, per=27.77%, avg=16384.00, stdev=328.10, samples=2 00:13:17.563 iops : min= 4038, max= 4154, avg=4096.00, stdev=82.02, samples=2 00:13:17.563 lat (usec) : 1000=0.04% 00:13:17.563 lat (msec) : 2=0.10%, 4=0.23%, 10=9.54%, 20=74.70%, 50=14.78% 00:13:17.563 lat (msec) : 100=0.61% 00:13:17.563 cpu : usr=3.08%, sys=5.46%, ctx=332, majf=0, minf=7 00:13:17.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:17.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:17.563 issued rwts: total=4089,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.563 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:13:17.563 00:13:17.563 Run status group 0 (all jobs): 00:13:17.563 READ: bw=54.3MiB/s (56.9MB/s), 9.91MiB/s-16.8MiB/s (10.4MB/s-17.6MB/s), io=54.8MiB (57.5MB), run=1002-1009msec 00:13:17.563 WRITE: bw=57.6MiB/s (60.4MB/s), 10.3MiB/s-18.0MiB/s (10.8MB/s-18.8MB/s), io=58.1MiB (61.0MB), run=1002-1009msec 00:13:17.563 00:13:17.563 Disk stats (read/write): 00:13:17.563 nvme0n1: ios=2768/3072, merge=0/0, ticks=15626/14963, in_queue=30589, util=99.00% 00:13:17.563 nvme0n2: ios=3700/4096, merge=0/0, ticks=13975/15309, in_queue=29284, util=87.49% 00:13:17.563 nvme0n3: ios=2104/2480, merge=0/0, ticks=20884/23638, in_queue=44522, util=97.16% 00:13:17.563 nvme0n4: ios=3250/3584, merge=0/0, ticks=30395/36513, in_queue=66908, util=99.15% 00:13:17.563 03:26:54 -- target/fio.sh@55 -- # sync 00:13:17.563 03:26:54 -- target/fio.sh@59 -- # fio_pid=240578 00:13:17.563 03:26:54 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:17.563 03:26:54 -- target/fio.sh@61 -- # sleep 3 00:13:17.563 [global] 00:13:17.563 thread=1 00:13:17.563 invalidate=1 00:13:17.563 rw=read 00:13:17.563 time_based=1 00:13:17.563 runtime=10 00:13:17.563 ioengine=libaio 00:13:17.563 direct=1 00:13:17.563 bs=4096 00:13:17.563 iodepth=1 00:13:17.563 norandommap=1 00:13:17.563 numjobs=1 00:13:17.563 00:13:17.563 [job0] 00:13:17.563 filename=/dev/nvme0n1 00:13:17.563 [job1] 00:13:17.563 filename=/dev/nvme0n2 00:13:17.563 [job2] 00:13:17.563 filename=/dev/nvme0n3 00:13:17.563 [job3] 00:13:17.563 filename=/dev/nvme0n4 00:13:17.563 Could not set queue depth (nvme0n1) 00:13:17.563 Could not set queue depth (nvme0n2) 00:13:17.563 Could not set queue depth (nvme0n3) 00:13:17.563 Could not set queue depth (nvme0n4) 00:13:17.563 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:17.563 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:17.563 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:17.563 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:17.563 fio-3.35 00:13:17.563 Starting 4 threads 00:13:20.844 03:26:57 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:20.844 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=21159936, buflen=4096 00:13:20.844 fio: pid=240677, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:20.844 03:26:58 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:20.844 03:26:58 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:20.844 03:26:58 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:20.844 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=20140032, buflen=4096 00:13:20.844 fio: pid=240676, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:21.102 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=409600, buflen=4096 00:13:21.102 fio: pid=240672, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:21.103 03:26:58 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:13:21.103 03:26:58 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:21.362 03:26:58 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:21.362 03:26:58 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:21.362 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=3305472, buflen=4096 00:13:21.362 fio: pid=240673, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:13:21.362 00:13:21.362 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=240672: Fri Apr 19 03:26:58 2024 00:13:21.362 read: IOPS=29, BW=118KiB/s (121kB/s)(400KiB/3392msec) 00:13:21.362 slat (usec): min=4, max=11824, avg=204.61, stdev=1352.60 00:13:21.362 clat (usec): min=364, max=42159, avg=33695.45, stdev=16242.02 00:13:21.362 lat (usec): min=376, max=53037, avg=33833.31, stdev=16345.60 00:13:21.362 clat percentiles (usec): 00:13:21.362 | 1.00th=[ 367], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[31065], 00:13:21.362 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:13:21.362 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:21.362 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:21.362 | 99.99th=[42206] 00:13:21.362 bw ( KiB/s): min= 88, max= 240, per=1.00%, avg=120.00, stdev=59.01, samples=6 00:13:21.362 iops : min= 22, max= 60, avg=30.00, stdev=14.75, samples=6 00:13:21.362 lat (usec) : 500=16.83%, 750=1.98% 00:13:21.362 lat (msec) : 50=80.20% 00:13:21.362 cpu : usr=0.00%, sys=0.09%, ctx=105, majf=0, minf=1 00:13:21.362 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:21.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.362 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.362 issued rwts: total=101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.362 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:21.362 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=240673: Fri Apr 19 03:26:58 2024 00:13:21.362 read: IOPS=219, BW=878KiB/s (899kB/s)(3228KiB/3678msec) 00:13:21.362 slat (usec): min=5, max=8892, avg=39.17, stdev=393.86 00:13:21.362 clat (usec): min=302, max=42040, avg=4511.92, stdev=12367.05 00:13:21.362 lat (usec): min=317, max=50054, avg=4551.10, stdev=12431.81 00:13:21.362 clat percentiles (usec): 00:13:21.362 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:13:21.362 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 375], 00:13:21.362 | 70.00th=[ 400], 80.00th=[ 474], 90.00th=[40633], 95.00th=[41681], 00:13:21.362 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:21.362 | 99.99th=[42206] 00:13:21.362 bw ( KiB/s): min= 88, max= 3488, per=7.67%, avg=917.29, stdev=1432.72, samples=7 00:13:21.362 iops : min= 22, max= 872, avg=229.29, stdev=358.21, samples=7 00:13:21.362 lat (usec) : 500=82.67%, 750=7.05% 00:13:21.362 lat (msec) : 20=0.12%, 50=10.02% 00:13:21.362 cpu : usr=0.30%, sys=0.54%, ctx=810, majf=0, minf=1 00:13:21.362 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:21.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.362 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.362 issued rwts: total=808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.362 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:21.362 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=240676: Fri Apr 19 03:26:58 2024 00:13:21.362 read: IOPS=1553, BW=6214KiB/s (6363kB/s)(19.2MiB/3165msec) 00:13:21.362 slat (nsec): min=5469, max=53710, avg=12807.56, stdev=5850.87 00:13:21.362 clat (usec): min=263, max=41290, avg=627.18, stdev=3317.86 00:13:21.362 lat (usec): min=269, max=41298, avg=639.98, stdev=3318.72 00:13:21.362 clat percentiles (usec): 00:13:21.362 | 1.00th=[ 289], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 334], 00:13:21.362 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 355], 00:13:21.362 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 392], 95.00th=[ 420], 00:13:21.362 | 99.00th=[ 562], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:21.362 | 99.99th=[41157] 00:13:21.362 bw ( KiB/s): min= 96, max=10712, per=54.80%, avg=6550.67, stdev=5153.88, samples=6 00:13:21.362 iops : min= 24, max= 2678, avg=1637.67, stdev=1288.47, samples=6 00:13:21.362 lat (usec) : 500=98.50%, 750=0.79% 00:13:21.362 lat (msec) : 4=0.02%, 50=0.67% 00:13:21.362 cpu : usr=1.74%, sys=2.78%, ctx=4919, majf=0, minf=1 00:13:21.362 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:21.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.362 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.362 issued rwts: total=4918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.362 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:21.362 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=240677: Fri Apr 19 03:26:58 2024 00:13:21.362 read: IOPS=1790, BW=7160KiB/s (7332kB/s)(20.2MiB/2886msec) 00:13:21.362 slat (nsec): min=4299, max=61972, avg=14007.91, stdev=9621.11 00:13:21.362 clat (usec): min=263, max=41076, avg=541.15, stdev=2644.54 00:13:21.362 lat (usec): min=272, max=41092, avg=555.16, stdev=2645.48 00:13:21.362 clat percentiles (usec): 00:13:21.362 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 293], 00:13:21.362 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 351], 60.00th=[ 383], 00:13:21.362 | 70.00th=[ 437], 80.00th=[ 449], 90.00th=[ 469], 95.00th=[ 490], 00:13:21.362 | 99.00th=[ 545], 99.50th=[ 635], 99.90th=[41157], 99.95th=[41157], 00:13:21.362 | 99.99th=[41157] 00:13:21.362 bw ( KiB/s): min= 104, max=11464, per=55.02%, avg=6576.00, stdev=4194.05, samples=5 00:13:21.362 iops : min= 26, max= 2866, avg=1644.00, stdev=1048.51, samples=5 00:13:21.362 lat (usec) : 500=96.21%, 750=3.35% 00:13:21.362 lat (msec) : 50=0.43% 00:13:21.362 cpu : usr=1.07%, sys=3.26%, ctx=5167, majf=0, minf=1 00:13:21.362 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:21.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.362 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.362 issued rwts: total=5167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.362 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:21.362 00:13:21.362 Run status group 0 (all jobs): 00:13:21.362 READ: bw=11.7MiB/s (12.2MB/s), 118KiB/s-7160KiB/s (121kB/s-7332kB/s), io=42.9MiB (45.0MB), run=2886-3678msec 00:13:21.362 00:13:21.362 Disk stats (read/write): 00:13:21.362 nvme0n1: ios=142/0, merge=0/0, 
ticks=4393/0, in_queue=4393, util=99.57% 00:13:21.362 nvme0n2: ios=805/0, merge=0/0, ticks=3551/0, in_queue=3551, util=96.20% 00:13:21.362 nvme0n3: ios=4915/0, merge=0/0, ticks=2939/0, in_queue=2939, util=96.76% 00:13:21.362 nvme0n4: ios=5078/0, merge=0/0, ticks=2696/0, in_queue=2696, util=96.71% 00:13:21.621 03:26:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:21.621 03:26:59 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:21.878 03:26:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:21.878 03:26:59 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:22.137 03:26:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:22.137 03:26:59 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:22.433 03:26:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:22.433 03:26:59 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:22.691 03:27:00 -- target/fio.sh@69 -- # fio_status=0 00:13:22.691 03:27:00 -- target/fio.sh@70 -- # wait 240578 00:13:22.691 03:27:00 -- target/fio.sh@70 -- # fio_status=4 00:13:22.691 03:27:00 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.691 03:27:00 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.691 03:27:00 -- common/autotest_common.sh@1205 -- # local i=0 00:13:22.691 03:27:00 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:22.691 03:27:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.691 03:27:00 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:22.691 03:27:00 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.691 03:27:00 -- common/autotest_common.sh@1217 -- # return 0 00:13:22.691 03:27:00 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:22.691 03:27:00 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:22.691 nvmf hotplug test: fio failed as expected 00:13:22.691 03:27:00 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.949 03:27:00 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:22.949 03:27:00 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:22.949 03:27:00 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:22.949 03:27:00 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:22.949 03:27:00 -- target/fio.sh@91 -- # nvmftestfini 00:13:22.949 03:27:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:22.949 03:27:00 -- nvmf/common.sh@117 -- # sync 00:13:22.949 03:27:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.949 03:27:00 -- nvmf/common.sh@120 -- # set +e 00:13:22.949 03:27:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.949 03:27:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.949 rmmod nvme_tcp 00:13:22.949 rmmod nvme_fabrics 00:13:22.949 rmmod nvme_keyring 00:13:23.208 03:27:00 -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics
00:13:23.208 03:27:00 -- nvmf/common.sh@124 -- # set -e
00:13:23.208 03:27:00 -- nvmf/common.sh@125 -- # return 0
00:13:23.208 03:27:00 -- nvmf/common.sh@478 -- # '[' -n 238560 ']'
00:13:23.208 03:27:00 -- nvmf/common.sh@479 -- # killprocess 238560
00:13:23.208 03:27:00 -- common/autotest_common.sh@936 -- # '[' -z 238560 ']'
00:13:23.208 03:27:00 -- common/autotest_common.sh@940 -- # kill -0 238560
00:13:23.208 03:27:00 -- common/autotest_common.sh@941 -- # uname
00:13:23.208 03:27:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:23.208 03:27:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 238560
00:13:23.208 03:27:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:23.208 03:27:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:23.208 03:27:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 238560'
00:13:23.208 killing process with pid 238560
00:13:23.208 03:27:00 -- common/autotest_common.sh@955 -- # kill 238560
00:13:23.208 03:27:00 -- common/autotest_common.sh@960 -- # wait 238560
00:13:23.465 03:27:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:13:23.466 03:27:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:13:23.466 03:27:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:13:23.466 03:27:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:23.466 03:27:00 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:23.466 03:27:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:23.466 03:27:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:23.466 03:27:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:25.369 03:27:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:25.369
00:13:25.369 real 0m23.146s
00:13:25.369 user 1m19.978s
00:13:25.369 sys 0m6.507s
00:13:25.369 03:27:02 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:13:25.369 03:27:02 -- common/autotest_common.sh@10 -- # set +x
00:13:25.369 ************************************
00:13:25.369 END TEST nvmf_fio_target
00:13:25.369 ************************************
00:13:25.369 03:27:02 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:13:25.369 03:27:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:25.369 03:27:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:25.369 03:27:02 -- common/autotest_common.sh@10 -- # set +x
00:13:25.627 ************************************
00:13:25.627 START TEST nvmf_bdevio
00:13:25.627 ************************************
00:13:25.627 03:27:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:13:25.627 * Looking for test storage...
00:13:25.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.627 03:27:03 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.627 03:27:03 -- nvmf/common.sh@7 -- # uname -s 00:13:25.627 03:27:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.627 03:27:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.627 03:27:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.627 03:27:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.627 03:27:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.627 03:27:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.627 03:27:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.627 03:27:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.627 03:27:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.627 03:27:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.627 03:27:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.627 03:27:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.627 03:27:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.627 03:27:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.627 03:27:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.627 03:27:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.627 03:27:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.627 03:27:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.627 03:27:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.627 03:27:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.627 03:27:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.627 03:27:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.627 03:27:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.627 03:27:03 -- paths/export.sh@5 -- # export PATH 00:13:25.627 03:27:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.627 03:27:03 -- nvmf/common.sh@47 -- # : 0 00:13:25.627 03:27:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.627 03:27:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.627 03:27:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.627 03:27:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.627 03:27:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.627 03:27:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.627 03:27:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.627 03:27:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.627 03:27:03 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:25.627 03:27:03 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:25.627 03:27:03 -- target/bdevio.sh@14 -- # nvmftestinit 00:13:25.627 03:27:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:25.627 03:27:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.627 03:27:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:25.627 03:27:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:25.627 03:27:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:25.627 03:27:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.627 03:27:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.627 03:27:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.627 03:27:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:25.627 03:27:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:25.627 03:27:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:25.627 03:27:03 -- common/autotest_common.sh@10 -- # set +x 00:13:27.533 03:27:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:27.533 03:27:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:27.533 03:27:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:27.533 03:27:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:27.533 03:27:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:27.533 03:27:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:27.533 03:27:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:27.533 03:27:04 -- nvmf/common.sh@295 -- # net_devs=() 00:13:27.533 03:27:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:27.533 03:27:04 -- nvmf/common.sh@296 
-- # e810=() 00:13:27.533 03:27:04 -- nvmf/common.sh@296 -- # local -ga e810 00:13:27.533 03:27:04 -- nvmf/common.sh@297 -- # x722=() 00:13:27.533 03:27:04 -- nvmf/common.sh@297 -- # local -ga x722 00:13:27.533 03:27:04 -- nvmf/common.sh@298 -- # mlx=() 00:13:27.533 03:27:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:27.533 03:27:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.533 03:27:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.533 03:27:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.533 03:27:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.533 03:27:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.533 03:27:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.533 03:27:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.533 03:27:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.533 03:27:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.533 03:27:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.533 03:27:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.533 03:27:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:27.533 03:27:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:27.533 03:27:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:27.533 03:27:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.533 03:27:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:27.533 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:27.533 03:27:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.533 03:27:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:27.533 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:27.533 03:27:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:27.533 03:27:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.533 03:27:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.533 03:27:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:27.533 03:27:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.533 03:27:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:27.533 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:13:27.533 03:27:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.533 03:27:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.533 03:27:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.533 03:27:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:27.533 03:27:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.533 03:27:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:27.533 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:27.533 03:27:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.533 03:27:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:27.533 03:27:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:27.533 03:27:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:27.533 03:27:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:27.533 03:27:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.533 03:27:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.533 03:27:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.533 03:27:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:27.533 03:27:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.533 03:27:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.533 03:27:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:27.533 03:27:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.533 03:27:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.533 03:27:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:27.533 03:27:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:27.534 03:27:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.534 03:27:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.534 03:27:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.534 03:27:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.534 03:27:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:27.534 03:27:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.534 03:27:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.534 03:27:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.792 03:27:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:27.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:13:27.792 00:13:27.792 --- 10.0.0.2 ping statistics --- 00:13:27.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.792 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:13:27.792 03:27:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:13:27.792 00:13:27.792 --- 10.0.0.1 ping statistics --- 00:13:27.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.792 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:13:27.792 03:27:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.792 03:27:05 -- nvmf/common.sh@411 -- # return 0 00:13:27.792 03:27:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:27.792 03:27:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.792 03:27:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:27.792 03:27:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:27.792 03:27:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.792 03:27:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:27.792 03:27:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:27.793 03:27:05 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:27.793 03:27:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:27.793 03:27:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:27.793 03:27:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.793 03:27:05 -- nvmf/common.sh@470 -- # nvmfpid=243421 00:13:27.793 03:27:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:27.793 03:27:05 -- nvmf/common.sh@471 -- # waitforlisten 243421 00:13:27.793 03:27:05 -- common/autotest_common.sh@817 -- # '[' -z 243421 ']' 00:13:27.793 03:27:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.793 03:27:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:27.793 03:27:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.793 03:27:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:27.793 03:27:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.793 [2024-04-19 03:27:05.172463] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:13:27.793 [2024-04-19 03:27:05.172567] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.793 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.793 [2024-04-19 03:27:05.243269] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.052 [2024-04-19 03:27:05.364187] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.052 [2024-04-19 03:27:05.364263] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.052 [2024-04-19 03:27:05.364279] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.052 [2024-04-19 03:27:05.364302] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.052 [2024-04-19 03:27:05.364315] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
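[Aside, not part of the captured log: the nvmf target above was started with every trace group enabled (-e 0xFFFF), and the app_setup_trace notices it just printed name two ways to inspect those tracepoints. A minimal sketch of acting on them follows; it assumes spdk_trace was built into the same build/bin directory as nvmf_tgt and that the target is still running with shared-memory id 0, the -i 0 it was launched with.]

    # Decode a live snapshot of the 'nvmf' target's tracepoints, exactly as the
    # startup notice suggests; the decoded events are printed to stdout.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
    # Or keep the raw trace shared memory for offline analysis/debug, per the
    # last notice above.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0

[End of aside.]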
00:13:28.052 [2024-04-19 03:27:05.364436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:28.052 [2024-04-19 03:27:05.364492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:28.052 [2024-04-19 03:27:05.364547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:28.052 [2024-04-19 03:27:05.364551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.618 03:27:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:28.618 03:27:06 -- common/autotest_common.sh@850 -- # return 0 00:13:28.618 03:27:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:28.618 03:27:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:28.618 03:27:06 -- common/autotest_common.sh@10 -- # set +x 00:13:28.619 03:27:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.619 03:27:06 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.619 03:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.619 03:27:06 -- common/autotest_common.sh@10 -- # set +x 00:13:28.619 [2024-04-19 03:27:06.142233] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.619 03:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.619 03:27:06 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:28.619 03:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.619 03:27:06 -- common/autotest_common.sh@10 -- # set +x 00:13:28.619 Malloc0 00:13:28.619 03:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.619 03:27:06 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:28.619 03:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.619 03:27:06 -- common/autotest_common.sh@10 -- # set +x 00:13:28.879 03:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.879 03:27:06 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:28.879 03:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.879 03:27:06 -- common/autotest_common.sh@10 -- # set +x 00:13:28.879 03:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.879 03:27:06 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.879 03:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.879 03:27:06 -- common/autotest_common.sh@10 -- # set +x 00:13:28.879 [2024-04-19 03:27:06.193449] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.879 03:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.879 03:27:06 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:28.879 03:27:06 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:28.879 03:27:06 -- nvmf/common.sh@521 -- # config=() 00:13:28.879 03:27:06 -- nvmf/common.sh@521 -- # local subsystem config 00:13:28.879 03:27:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:28.879 03:27:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:28.879 { 00:13:28.879 "params": { 00:13:28.879 "name": "Nvme$subsystem", 00:13:28.879 "trtype": "$TEST_TRANSPORT", 00:13:28.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:28.879 "adrfam": "ipv4", 00:13:28.879 "trsvcid": 
"$NVMF_PORT", 00:13:28.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:28.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:28.879 "hdgst": ${hdgst:-false}, 00:13:28.879 "ddgst": ${ddgst:-false} 00:13:28.879 }, 00:13:28.879 "method": "bdev_nvme_attach_controller" 00:13:28.879 } 00:13:28.879 EOF 00:13:28.879 )") 00:13:28.879 03:27:06 -- nvmf/common.sh@543 -- # cat 00:13:28.879 03:27:06 -- nvmf/common.sh@545 -- # jq . 00:13:28.879 03:27:06 -- nvmf/common.sh@546 -- # IFS=, 00:13:28.879 03:27:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:28.879 "params": { 00:13:28.879 "name": "Nvme1", 00:13:28.879 "trtype": "tcp", 00:13:28.879 "traddr": "10.0.0.2", 00:13:28.879 "adrfam": "ipv4", 00:13:28.879 "trsvcid": "4420", 00:13:28.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:28.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:28.879 "hdgst": false, 00:13:28.879 "ddgst": false 00:13:28.879 }, 00:13:28.879 "method": "bdev_nvme_attach_controller" 00:13:28.879 }' 00:13:28.879 [2024-04-19 03:27:06.236237] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:13:28.879 [2024-04-19 03:27:06.236325] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243575 ] 00:13:28.879 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.879 [2024-04-19 03:27:06.298828] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:28.879 [2024-04-19 03:27:06.414319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.879 [2024-04-19 03:27:06.414370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.879 [2024-04-19 03:27:06.414374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.139 I/O targets: 00:13:29.139 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:29.139 00:13:29.139 00:13:29.139 CUnit - A unit testing framework for C - Version 2.1-3 00:13:29.139 http://cunit.sourceforge.net/ 00:13:29.139 00:13:29.139 00:13:29.139 Suite: bdevio tests on: Nvme1n1 00:13:29.139 Test: blockdev write read block ...passed 00:13:29.398 Test: blockdev write zeroes read block ...passed 00:13:29.398 Test: blockdev write zeroes read no split ...passed 00:13:29.398 Test: blockdev write zeroes read split ...passed 00:13:29.398 Test: blockdev write zeroes read split partial ...passed 00:13:29.398 Test: blockdev reset ...[2024-04-19 03:27:06.846261] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:29.398 [2024-04-19 03:27:06.846373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2f70 (9): Bad file descriptor 00:13:29.398 [2024-04-19 03:27:06.955804] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:29.398 passed 00:13:29.398 Test: blockdev write read 8 blocks ...passed 00:13:29.658 Test: blockdev write read size > 128k ...passed 00:13:29.658 Test: blockdev write read invalid size ...passed 00:13:29.658 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:29.658 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:29.658 Test: blockdev write read max offset ...passed 00:13:29.658 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:29.658 Test: blockdev writev readv 8 blocks ...passed 00:13:29.658 Test: blockdev writev readv 30 x 1block ...passed 00:13:29.658 Test: blockdev writev readv block ...passed 00:13:29.658 Test: blockdev writev readv size > 128k ...passed 00:13:29.658 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:29.658 Test: blockdev comparev and writev ...[2024-04-19 03:27:07.169551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.658 [2024-04-19 03:27:07.169587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:29.658 [2024-04-19 03:27:07.169612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.658 [2024-04-19 03:27:07.169629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:29.658 [2024-04-19 03:27:07.170049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.658 [2024-04-19 03:27:07.170074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:29.658 [2024-04-19 03:27:07.170096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.658 [2024-04-19 03:27:07.170112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:29.658 [2024-04-19 03:27:07.170492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.658 [2024-04-19 03:27:07.170516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:29.658 [2024-04-19 03:27:07.170538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.658 [2024-04-19 03:27:07.170554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:29.658 [2024-04-19 03:27:07.170925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.658 [2024-04-19 03:27:07.170949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:29.658 [2024-04-19 03:27:07.170970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.658 [2024-04-19 03:27:07.170987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:29.658 passed 00:13:29.917 Test: blockdev nvme passthru rw ...passed 00:13:29.917 Test: blockdev nvme passthru vendor specific ...[2024-04-19 03:27:07.252739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:29.917 [2024-04-19 03:27:07.252768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:29.917 [2024-04-19 03:27:07.252959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:29.917 [2024-04-19 03:27:07.252982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:29.917 [2024-04-19 03:27:07.253163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:29.917 [2024-04-19 03:27:07.253185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:29.917 [2024-04-19 03:27:07.253359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:29.917 [2024-04-19 03:27:07.253389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:29.917 passed 00:13:29.917 Test: blockdev nvme admin passthru ...passed 00:13:29.917 Test: blockdev copy ...passed 00:13:29.917 00:13:29.917 Run Summary: Type Total Ran Passed Failed Inactive 00:13:29.917 suites 1 1 n/a 0 0 00:13:29.917 tests 23 23 23 0 0 00:13:29.917 asserts 152 152 152 0 n/a 00:13:29.917 00:13:29.917 Elapsed time = 1.345 seconds 00:13:30.175 03:27:07 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.175 03:27:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.175 03:27:07 -- common/autotest_common.sh@10 -- # set +x 00:13:30.175 03:27:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.175 03:27:07 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:30.175 03:27:07 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:30.175 03:27:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:30.175 03:27:07 -- nvmf/common.sh@117 -- # sync 00:13:30.175 03:27:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:30.175 03:27:07 -- nvmf/common.sh@120 -- # set +e 00:13:30.175 03:27:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:30.175 03:27:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:30.175 rmmod nvme_tcp 00:13:30.175 rmmod nvme_fabrics 00:13:30.175 rmmod nvme_keyring 00:13:30.175 03:27:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:30.175 03:27:07 -- nvmf/common.sh@124 -- # set -e 00:13:30.175 03:27:07 -- nvmf/common.sh@125 -- # return 0 00:13:30.175 03:27:07 -- nvmf/common.sh@478 -- # '[' -n 243421 ']' 00:13:30.176 03:27:07 -- nvmf/common.sh@479 -- # killprocess 243421 00:13:30.176 03:27:07 -- common/autotest_common.sh@936 -- # '[' -z 243421 ']' 00:13:30.176 03:27:07 -- common/autotest_common.sh@940 -- # kill -0 243421 00:13:30.176 03:27:07 -- common/autotest_common.sh@941 -- # uname 00:13:30.176 03:27:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:30.176 03:27:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 243421 00:13:30.176 03:27:07 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3
00:13:30.176 03:27:07 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']'
00:13:30.176 03:27:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 243421'
00:13:30.176 killing process with pid 243421
00:13:30.176 03:27:07 -- common/autotest_common.sh@955 -- # kill 243421
00:13:30.176 03:27:07 -- common/autotest_common.sh@960 -- # wait 243421
00:13:30.435 03:27:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:13:30.435 03:27:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:13:30.435 03:27:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:13:30.435 03:27:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:30.435 03:27:07 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:30.435 03:27:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:30.435 03:27:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:30.435 03:27:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:32.966 03:27:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:32.966
00:13:32.966 real 0m6.964s
00:13:32.966 user 0m13.185s
00:13:32.966 sys 0m2.088s
00:13:32.966 03:27:09 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:13:32.966 03:27:09 -- common/autotest_common.sh@10 -- # set +x
00:13:32.966 ************************************
00:13:32.966 END TEST nvmf_bdevio
00:13:32.966 ************************************
00:13:32.966 03:27:09 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']'
00:13:32.966 03:27:09 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:13:32.966 03:27:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:13:32.966 03:27:09 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:32.966 03:27:09 -- common/autotest_common.sh@10 -- # set +x
00:13:32.966 ************************************
00:13:32.966 START TEST nvmf_bdevio_no_huge
00:13:32.966 ************************************
00:13:32.966 03:27:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:13:32.966 * Looking for test storage...
00:13:32.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.966 03:27:10 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.966 03:27:10 -- nvmf/common.sh@7 -- # uname -s 00:13:32.966 03:27:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.966 03:27:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.966 03:27:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.966 03:27:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.966 03:27:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.966 03:27:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.966 03:27:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.966 03:27:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.966 03:27:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.966 03:27:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.966 03:27:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.966 03:27:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.966 03:27:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.966 03:27:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.966 03:27:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.966 03:27:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.966 03:27:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.966 03:27:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.966 03:27:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.966 03:27:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.966 03:27:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.967 03:27:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.967 03:27:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.967 03:27:10 -- paths/export.sh@5 -- # export PATH 00:13:32.967 03:27:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.967 03:27:10 -- nvmf/common.sh@47 -- # : 0 00:13:32.967 03:27:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:32.967 03:27:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:32.967 03:27:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.967 03:27:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.967 03:27:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.967 03:27:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:32.967 03:27:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:32.967 03:27:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:32.967 03:27:10 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:32.967 03:27:10 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:32.967 03:27:10 -- target/bdevio.sh@14 -- # nvmftestinit 00:13:32.967 03:27:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:32.967 03:27:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.967 03:27:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:32.967 03:27:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:32.967 03:27:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:32.967 03:27:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.967 03:27:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.967 03:27:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.967 03:27:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:32.967 03:27:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:32.967 03:27:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:32.967 03:27:10 -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 03:27:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:34.870 03:27:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:34.870 03:27:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:34.870 03:27:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:34.870 03:27:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:34.870 03:27:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:34.870 03:27:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:34.870 03:27:12 -- nvmf/common.sh@295 -- # net_devs=() 00:13:34.870 03:27:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:34.870 03:27:12 -- nvmf/common.sh@296 
-- # e810=() 00:13:34.870 03:27:12 -- nvmf/common.sh@296 -- # local -ga e810 00:13:34.870 03:27:12 -- nvmf/common.sh@297 -- # x722=() 00:13:34.870 03:27:12 -- nvmf/common.sh@297 -- # local -ga x722 00:13:34.870 03:27:12 -- nvmf/common.sh@298 -- # mlx=() 00:13:34.870 03:27:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:34.870 03:27:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.870 03:27:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.870 03:27:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.870 03:27:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.870 03:27:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.870 03:27:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.870 03:27:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.870 03:27:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.870 03:27:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.870 03:27:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.870 03:27:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.870 03:27:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:34.870 03:27:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:34.870 03:27:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:34.870 03:27:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.870 03:27:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:34.870 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:34.870 03:27:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.870 03:27:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:34.870 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:34.870 03:27:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:34.870 03:27:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.870 03:27:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.870 03:27:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:34.870 03:27:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.870 03:27:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:34.870 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:13:34.870 03:27:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.870 03:27:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.870 03:27:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.870 03:27:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:34.870 03:27:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.870 03:27:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:34.870 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:34.870 03:27:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.870 03:27:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:34.870 03:27:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:34.870 03:27:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:34.870 03:27:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.870 03:27:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.870 03:27:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.870 03:27:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:34.870 03:27:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.870 03:27:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.870 03:27:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:34.870 03:27:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.870 03:27:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.870 03:27:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:34.870 03:27:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:34.870 03:27:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.870 03:27:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.870 03:27:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.870 03:27:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.870 03:27:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:34.870 03:27:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.870 03:27:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.870 03:27:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.870 03:27:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:34.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:13:34.870 00:13:34.870 --- 10.0.0.2 ping statistics --- 00:13:34.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.870 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:13:34.870 03:27:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:34.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:13:34.870 00:13:34.870 --- 10.0.0.1 ping statistics --- 00:13:34.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.870 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:13:34.870 03:27:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.870 03:27:12 -- nvmf/common.sh@411 -- # return 0 00:13:34.870 03:27:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:34.870 03:27:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.870 03:27:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:34.870 03:27:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.870 03:27:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:34.870 03:27:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:34.870 03:27:12 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:34.870 03:27:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:34.870 03:27:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:34.870 03:27:12 -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 03:27:12 -- nvmf/common.sh@470 -- # nvmfpid=246162 00:13:34.870 03:27:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:34.870 03:27:12 -- nvmf/common.sh@471 -- # waitforlisten 246162 00:13:34.870 03:27:12 -- common/autotest_common.sh@817 -- # '[' -z 246162 ']' 00:13:34.870 03:27:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.870 03:27:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:34.870 03:27:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.871 03:27:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:34.871 03:27:12 -- common/autotest_common.sh@10 -- # set +x 00:13:34.871 [2024-04-19 03:27:12.298107] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:13:34.871 [2024-04-19 03:27:12.298182] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:34.871 [2024-04-19 03:27:12.370318] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.129 [2024-04-19 03:27:12.475830] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.129 [2024-04-19 03:27:12.475885] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.129 [2024-04-19 03:27:12.475923] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.129 [2024-04-19 03:27:12.475935] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.129 [2024-04-19 03:27:12.475945] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
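[editor's note] The nvmf_tcp_init sequence traced above is the whole test topology: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace to act as the target side at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove the link in both directions. A condensed replay of those steps, with commands copied from the trace (run as root; error handling omitted):

TARGET_NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # initiator -> target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt process below is then launched with ip netns exec cvl_0_0_ns_spdk, so target traffic really crosses the physical E810 link instead of loopback.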
00:13:35.129 [2024-04-19 03:27:12.476041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:35.129 [2024-04-19 03:27:12.476114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:35.129 [2024-04-19 03:27:12.476169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:35.129 [2024-04-19 03:27:12.476171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.129 03:27:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:35.129 03:27:12 -- common/autotest_common.sh@850 -- # return 0 00:13:35.129 03:27:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:35.129 03:27:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:35.129 03:27:12 -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 03:27:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.129 03:27:12 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:35.129 03:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.129 03:27:12 -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 [2024-04-19 03:27:12.608412] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.129 03:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.129 03:27:12 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:35.129 03:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.129 03:27:12 -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 Malloc0 00:13:35.129 03:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.130 03:27:12 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:35.130 03:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.130 03:27:12 -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 03:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.130 03:27:12 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:35.130 03:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.130 03:27:12 -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 03:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.130 03:27:12 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.130 03:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.130 03:27:12 -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 [2024-04-19 03:27:12.646566] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.130 03:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.130 03:27:12 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:35.130 03:27:12 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:35.130 03:27:12 -- nvmf/common.sh@521 -- # config=() 00:13:35.130 03:27:12 -- nvmf/common.sh@521 -- # local subsystem config 00:13:35.130 03:27:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:35.130 03:27:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:35.130 { 00:13:35.130 "params": { 00:13:35.130 "name": "Nvme$subsystem", 00:13:35.130 "trtype": "$TEST_TRANSPORT", 00:13:35.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:35.130 "adrfam": "ipv4", 00:13:35.130 
"trsvcid": "$NVMF_PORT", 00:13:35.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:35.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:35.130 "hdgst": ${hdgst:-false}, 00:13:35.130 "ddgst": ${ddgst:-false} 00:13:35.130 }, 00:13:35.130 "method": "bdev_nvme_attach_controller" 00:13:35.130 } 00:13:35.130 EOF 00:13:35.130 )") 00:13:35.130 03:27:12 -- nvmf/common.sh@543 -- # cat 00:13:35.130 03:27:12 -- nvmf/common.sh@545 -- # jq . 00:13:35.130 03:27:12 -- nvmf/common.sh@546 -- # IFS=, 00:13:35.130 03:27:12 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:35.130 "params": { 00:13:35.130 "name": "Nvme1", 00:13:35.130 "trtype": "tcp", 00:13:35.130 "traddr": "10.0.0.2", 00:13:35.130 "adrfam": "ipv4", 00:13:35.130 "trsvcid": "4420", 00:13:35.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:35.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:35.130 "hdgst": false, 00:13:35.130 "ddgst": false 00:13:35.130 }, 00:13:35.130 "method": "bdev_nvme_attach_controller" 00:13:35.130 }' 00:13:35.388 [2024-04-19 03:27:12.696244] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:13:35.388 [2024-04-19 03:27:12.696336] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid246201 ] 00:13:35.388 [2024-04-19 03:27:12.764401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:35.388 [2024-04-19 03:27:12.880460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.388 [2024-04-19 03:27:12.880520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.388 [2024-04-19 03:27:12.880522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.646 I/O targets: 00:13:35.646 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:35.646 00:13:35.646 00:13:35.646 CUnit - A unit testing framework for C - Version 2.1-3 00:13:35.646 http://cunit.sourceforge.net/ 00:13:35.646 00:13:35.646 00:13:35.646 Suite: bdevio tests on: Nvme1n1 00:13:35.904 Test: blockdev write read block ...passed 00:13:35.904 Test: blockdev write zeroes read block ...passed 00:13:35.904 Test: blockdev write zeroes read no split ...passed 00:13:35.904 Test: blockdev write zeroes read split ...passed 00:13:35.904 Test: blockdev write zeroes read split partial ...passed 00:13:35.904 Test: blockdev reset ...[2024-04-19 03:27:13.379846] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:35.904 [2024-04-19 03:27:13.379975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa245c0 (9): Bad file descriptor 00:13:35.904 [2024-04-19 03:27:13.391293] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:35.904 passed 00:13:35.904 Test: blockdev write read 8 blocks ...passed 00:13:35.904 Test: blockdev write read size > 128k ...passed 00:13:35.904 Test: blockdev write read invalid size ...passed 00:13:35.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:35.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:35.904 Test: blockdev write read max offset ...passed 00:13:36.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:36.162 Test: blockdev writev readv 8 blocks ...passed 00:13:36.162 Test: blockdev writev readv 30 x 1block ...passed 00:13:36.162 Test: blockdev writev readv block ...passed 00:13:36.162 Test: blockdev writev readv size > 128k ...passed 00:13:36.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:36.162 Test: blockdev comparev and writev ...[2024-04-19 03:27:13.567474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.162 [2024-04-19 03:27:13.567510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:36.162 [2024-04-19 03:27:13.567534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.162 [2024-04-19 03:27:13.567550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:36.162 [2024-04-19 03:27:13.567920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.162 [2024-04-19 03:27:13.567945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:36.162 [2024-04-19 03:27:13.567967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.162 [2024-04-19 03:27:13.567983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:36.162 [2024-04-19 03:27:13.568360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.162 [2024-04-19 03:27:13.568392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:36.162 [2024-04-19 03:27:13.568415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.162 [2024-04-19 03:27:13.568432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:36.162 [2024-04-19 03:27:13.568797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.162 [2024-04-19 03:27:13.568821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:36.162 [2024-04-19 03:27:13.568842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.162 [2024-04-19 03:27:13.568864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:36.162 passed 00:13:36.162 Test: blockdev nvme passthru rw ...passed 00:13:36.162 Test: blockdev nvme passthru vendor specific ...[2024-04-19 03:27:13.651726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.162 [2024-04-19 03:27:13.651753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:36.162 [2024-04-19 03:27:13.651939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.162 [2024-04-19 03:27:13.651964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:36.162 [2024-04-19 03:27:13.652143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.162 [2024-04-19 03:27:13.652166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:36.162 [2024-04-19 03:27:13.652352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.162 [2024-04-19 03:27:13.652376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:36.162 passed 00:13:36.162 Test: blockdev nvme admin passthru ...passed 00:13:36.162 Test: blockdev copy ...passed 00:13:36.162 00:13:36.162 Run Summary: Type Total Ran Passed Failed Inactive 00:13:36.162 suites 1 1 n/a 0 0 00:13:36.162 tests 23 23 23 0 0 00:13:36.162 asserts 152 152 152 0 n/a 00:13:36.162 00:13:36.162 Elapsed time = 1.099 seconds 00:13:36.728 03:27:14 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.728 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.729 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:13:36.729 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.729 03:27:14 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:36.729 03:27:14 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:36.729 03:27:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:36.729 03:27:14 -- nvmf/common.sh@117 -- # sync 00:13:36.729 03:27:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:36.729 03:27:14 -- nvmf/common.sh@120 -- # set +e 00:13:36.729 03:27:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.729 03:27:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:36.729 rmmod nvme_tcp 00:13:36.729 rmmod nvme_fabrics 00:13:36.729 rmmod nvme_keyring 00:13:36.729 03:27:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.729 03:27:14 -- nvmf/common.sh@124 -- # set -e 00:13:36.729 03:27:14 -- nvmf/common.sh@125 -- # return 0 00:13:36.729 03:27:14 -- nvmf/common.sh@478 -- # '[' -n 246162 ']' 00:13:36.729 03:27:14 -- nvmf/common.sh@479 -- # killprocess 246162 00:13:36.729 03:27:14 -- common/autotest_common.sh@936 -- # '[' -z 246162 ']' 00:13:36.729 03:27:14 -- common/autotest_common.sh@940 -- # kill -0 246162 00:13:36.729 03:27:14 -- common/autotest_common.sh@941 -- # uname 00:13:36.729 03:27:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:36.729 03:27:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 246162 00:13:36.729 03:27:14 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:13:36.729 03:27:14 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:13:36.729 03:27:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 246162' 00:13:36.729 killing process with pid 246162 00:13:36.729 03:27:14 -- common/autotest_common.sh@955 -- # kill 246162 00:13:36.729 03:27:14 -- common/autotest_common.sh@960 -- # wait 246162 00:13:37.297 03:27:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:37.297 03:27:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:37.297 03:27:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:37.297 03:27:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.297 03:27:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.297 03:27:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.297 03:27:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.297 03:27:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.203 03:27:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:39.203 00:13:39.203 real 0m6.578s 00:13:39.203 user 0m11.106s 00:13:39.203 sys 0m2.505s 00:13:39.203 03:27:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:39.203 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:13:39.203 ************************************ 00:13:39.203 END TEST nvmf_bdevio_no_huge 00:13:39.203 ************************************ 00:13:39.203 03:27:16 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:39.203 03:27:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:39.203 03:27:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.203 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:13:39.462 ************************************ 00:13:39.462 START TEST nvmf_tls 00:13:39.462 ************************************ 00:13:39.462 03:27:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:39.462 * Looking for test storage... 
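[editor's note] Stepping back before the TLS suite gets going: the shutdown just traced follows autotest_common.sh's killprocess pattern — confirm the pid is alive, read its process name (reactor_3 here, i.e. not a sudo wrapper), SIGTERM it, then wait so the exit status and shutdown log lines are collected before nvmftestfini unloads the nvme modules. A minimal sketch of that flow; a hypothetical simplification, the real helper also covers FreeBSD and sudo-owned processes:

killprocess() {
	local pid=$1
	kill -0 "$pid" || return 1                      # still running?
	if [[ $(uname) == Linux ]]; then
		local name
		name=$(ps --no-headers -o comm= "$pid")     # reactor_3 in the trace above
		[[ $name == sudo ]] && return 1             # assumed guard: never SIGTERM sudo itself
	fi
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid" || true                             # reap; a nonzero exit here is expected
}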
00:13:39.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.462 03:27:16 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.462 03:27:16 -- nvmf/common.sh@7 -- # uname -s 00:13:39.462 03:27:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.462 03:27:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.462 03:27:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.462 03:27:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.462 03:27:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.462 03:27:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.462 03:27:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.462 03:27:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.462 03:27:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.462 03:27:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.462 03:27:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:39.462 03:27:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:39.462 03:27:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.462 03:27:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.462 03:27:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.462 03:27:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.462 03:27:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.462 03:27:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.462 03:27:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.462 03:27:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.462 03:27:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.462 03:27:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.462 03:27:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.462 03:27:16 -- paths/export.sh@5 -- # export PATH 00:13:39.462 03:27:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.462 03:27:16 -- nvmf/common.sh@47 -- # : 0 00:13:39.462 03:27:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.462 03:27:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.462 03:27:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.462 03:27:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.462 03:27:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.462 03:27:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.462 03:27:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.462 03:27:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.462 03:27:16 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.462 03:27:16 -- target/tls.sh@62 -- # nvmftestinit 00:13:39.462 03:27:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:39.462 03:27:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.462 03:27:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:39.462 03:27:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:39.462 03:27:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:39.462 03:27:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.462 03:27:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.462 03:27:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.462 03:27:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:39.462 03:27:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:39.462 03:27:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:39.462 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:13:41.376 03:27:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:41.376 03:27:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:41.376 03:27:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:41.376 03:27:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:41.376 03:27:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:41.376 03:27:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:41.376 03:27:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:41.376 03:27:18 -- nvmf/common.sh@295 -- # net_devs=() 00:13:41.376 03:27:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:41.376 03:27:18 -- nvmf/common.sh@296 -- # e810=() 00:13:41.376 
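[editor's note] nvmftestinit now re-runs NIC discovery for the TLS suite. The pattern in the array declarations around this point: supported NICs are bucketed into per-family arrays (e810, x722, mlx) keyed by PCI vendor:device ID, and SPDK_TEST_NVMF_NICS=e810 narrows pci_devs to the E810 family. A trimmed sketch of that classification — pci_bus_cache is assumed to be an associative array mapping "vendor:device" to PCI addresses, built earlier in nvmf/common.sh, and the device-name comments are assumptions:

intel=0x8086 mellanox=0x15b3
e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810-C QSFP (assumed)
e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810-XXV SFP - matches the 0000:0a:00.x ports found below
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # ConnectX-5 (assumed)
pci_devs+=("${e810[@]}")
pci_devs=("${e810[@]}")                      # e810 == e810: keep only this family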
03:27:18 -- nvmf/common.sh@296 -- # local -ga e810 00:13:41.376 03:27:18 -- nvmf/common.sh@297 -- # x722=() 00:13:41.376 03:27:18 -- nvmf/common.sh@297 -- # local -ga x722 00:13:41.376 03:27:18 -- nvmf/common.sh@298 -- # mlx=() 00:13:41.376 03:27:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:41.376 03:27:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.376 03:27:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.376 03:27:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.376 03:27:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.376 03:27:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.376 03:27:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.376 03:27:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.376 03:27:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.376 03:27:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.376 03:27:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.376 03:27:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.376 03:27:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:41.376 03:27:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:41.376 03:27:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:41.376 03:27:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.376 03:27:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:41.376 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:41.376 03:27:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.376 03:27:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:41.376 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:41.376 03:27:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:41.376 03:27:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.376 03:27:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.376 03:27:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:41.376 03:27:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.376 03:27:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:41.376 Found net devices under 
0000:0a:00.0: cvl_0_0 00:13:41.376 03:27:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.376 03:27:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.376 03:27:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.376 03:27:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:41.376 03:27:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.376 03:27:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:41.376 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:41.376 03:27:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.376 03:27:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:41.376 03:27:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:41.376 03:27:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:41.376 03:27:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:41.376 03:27:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.376 03:27:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.376 03:27:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.376 03:27:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:41.376 03:27:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.376 03:27:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.376 03:27:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:41.376 03:27:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.376 03:27:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.376 03:27:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:41.376 03:27:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:41.376 03:27:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.376 03:27:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.376 03:27:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.376 03:27:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.376 03:27:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:41.376 03:27:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.635 03:27:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.635 03:27:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.635 03:27:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:41.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:13:41.635 00:13:41.635 --- 10.0.0.2 ping statistics --- 00:13:41.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.635 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:41.635 03:27:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:41.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:13:41.635 00:13:41.635 --- 10.0.0.1 ping statistics --- 00:13:41.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.635 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:13:41.635 03:27:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.635 03:27:18 -- nvmf/common.sh@411 -- # return 0 00:13:41.635 03:27:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:41.635 03:27:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.635 03:27:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:41.635 03:27:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:41.635 03:27:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.635 03:27:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:41.635 03:27:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:41.635 03:27:18 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:41.635 03:27:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:41.635 03:27:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:41.635 03:27:18 -- common/autotest_common.sh@10 -- # set +x 00:13:41.635 03:27:18 -- nvmf/common.sh@470 -- # nvmfpid=248379 00:13:41.635 03:27:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:41.635 03:27:18 -- nvmf/common.sh@471 -- # waitforlisten 248379 00:13:41.635 03:27:18 -- common/autotest_common.sh@817 -- # '[' -z 248379 ']' 00:13:41.635 03:27:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.635 03:27:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:41.635 03:27:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.635 03:27:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:41.635 03:27:18 -- common/autotest_common.sh@10 -- # set +x 00:13:41.635 [2024-04-19 03:27:19.020846] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:13:41.635 [2024-04-19 03:27:19.020918] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.635 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.635 [2024-04-19 03:27:19.090134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.893 [2024-04-19 03:27:19.205340] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.893 [2024-04-19 03:27:19.205403] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.893 [2024-04-19 03:27:19.205431] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.893 [2024-04-19 03:27:19.205444] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.893 [2024-04-19 03:27:19.205456] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
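[editor's note] nvmfappstart above forks nvmf_tgt inside the namespace (with --wait-for-rpc and, for this suite, core mask 0x2) and then blocks in waitforlisten until the UNIX-domain RPC socket answers. A minimal sketch of that polling loop — a hypothetical simplification of the autotest_common.sh helper, using rpc.py's rpc_get_methods as the liveness probe and shortened paths:

waitforlisten() {
	local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
	for ((i = 0; i < max_retries; i++)); do
		kill -0 "$pid" 2> /dev/null || return 1    # target died during startup
		if rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
			return 0                               # socket is up and answering
		fi
		sleep 0.5
	done
	return 1                                       # never came up
}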
00:13:41.893 [2024-04-19 03:27:19.205498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.893 03:27:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:41.893 03:27:19 -- common/autotest_common.sh@850 -- # return 0 00:13:41.893 03:27:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:41.893 03:27:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:41.893 03:27:19 -- common/autotest_common.sh@10 -- # set +x 00:13:41.893 03:27:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.893 03:27:19 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:41.893 03:27:19 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:42.151 true 00:13:42.151 03:27:19 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:42.151 03:27:19 -- target/tls.sh@73 -- # jq -r .tls_version 00:13:42.448 03:27:19 -- target/tls.sh@73 -- # version=0 00:13:42.448 03:27:19 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:42.448 03:27:19 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:42.707 03:27:20 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:42.707 03:27:20 -- target/tls.sh@81 -- # jq -r .tls_version 00:13:42.965 03:27:20 -- target/tls.sh@81 -- # version=13 00:13:42.965 03:27:20 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:42.965 03:27:20 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:43.223 03:27:20 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:43.223 03:27:20 -- target/tls.sh@89 -- # jq -r .tls_version 00:13:43.223 03:27:20 -- target/tls.sh@89 -- # version=7 00:13:43.223 03:27:20 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:43.223 03:27:20 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:43.223 03:27:20 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:43.482 03:27:21 -- target/tls.sh@96 -- # ktls=false 00:13:43.482 03:27:21 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:43.482 03:27:21 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:43.740 03:27:21 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:43.740 03:27:21 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:43.998 03:27:21 -- target/tls.sh@104 -- # ktls=true 00:13:43.998 03:27:21 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:43.998 03:27:21 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:44.257 03:27:21 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:44.257 03:27:21 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:44.515 03:27:22 -- target/tls.sh@112 -- # ktls=false 00:13:44.515 03:27:22 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:44.515 03:27:22 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
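[editor's note] Before the key generation that follows, note what the RPC ping-pong above established: the default socket implementation is switched to ssl, tls_version starts at 0 (auto-negotiate), is pinned to 13 (TLS 1.3) and read back, then set to the value 7 (read back as-is), and kTLS is toggled on and off with every write verified by a matching read. A standalone replay of those calls, with the rpc.py path shortened:

rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
[[ $(rpc.py sock_impl_get_options -i ssl | jq -r .tls_version) == 13 ]]
rpc.py sock_impl_set_options -i ssl --tls-version 7
[[ $(rpc.py sock_impl_get_options -i ssl | jq -r .tls_version) == 7 ]]
rpc.py sock_impl_set_options -i ssl --enable-ktls
[[ $(rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls) == true ]]
rpc.py sock_impl_set_options -i ssl --disable-ktls
[[ $(rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls) == false ]]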
00:13:44.515 03:27:22 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:44.515 03:27:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:44.515 03:27:22 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:44.515 03:27:22 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:13:44.515 03:27:22 -- nvmf/common.sh@693 -- # digest=1 00:13:44.515 03:27:22 -- nvmf/common.sh@694 -- # python - 00:13:44.773 03:27:22 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:44.773 03:27:22 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:44.773 03:27:22 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:44.773 03:27:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:44.773 03:27:22 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:44.773 03:27:22 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:13:44.773 03:27:22 -- nvmf/common.sh@693 -- # digest=1 00:13:44.773 03:27:22 -- nvmf/common.sh@694 -- # python - 00:13:44.773 03:27:22 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:44.773 03:27:22 -- target/tls.sh@121 -- # mktemp 00:13:44.773 03:27:22 -- target/tls.sh@121 -- # key_path=/tmp/tmp.2nwlvtPMTf 00:13:44.773 03:27:22 -- target/tls.sh@122 -- # mktemp 00:13:44.773 03:27:22 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.tGR07DitC0 00:13:44.773 03:27:22 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:44.773 03:27:22 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:44.773 03:27:22 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.2nwlvtPMTf 00:13:44.773 03:27:22 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.tGR07DitC0 00:13:44.773 03:27:22 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:45.032 03:27:22 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:13:45.292 03:27:22 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.2nwlvtPMTf 00:13:45.292 03:27:22 -- target/tls.sh@49 -- # local key=/tmp/tmp.2nwlvtPMTf 00:13:45.292 03:27:22 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:45.551 [2024-04-19 03:27:22.981669] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.551 03:27:23 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:45.808 03:27:23 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:46.065 [2024-04-19 03:27:23.523104] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:46.065 [2024-04-19 03:27:23.523343] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.066 03:27:23 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:46.323 malloc0 00:13:46.323 03:27:23 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:46.581 03:27:24 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2nwlvtPMTf 00:13:46.840 [2024-04-19 03:27:24.272801] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:46.840 03:27:24 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.2nwlvtPMTf 00:13:46.840 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.877 Initializing NVMe Controllers 00:13:56.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:56.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:56.878 Initialization complete. Launching workers. 00:13:56.878 ======================================================== 00:13:56.878 Latency(us) 00:13:56.878 Device Information : IOPS MiB/s Average min max 00:13:56.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7663.60 29.94 8353.96 1049.26 9462.98 00:13:56.878 ======================================================== 00:13:56.878 Total : 7663.60 29.94 8353.96 1049.26 9462.98 00:13:56.878 00:13:56.878 03:27:34 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2nwlvtPMTf 00:13:56.878 03:27:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:56.878 03:27:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:56.878 03:27:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:56.878 03:27:34 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2nwlvtPMTf' 00:13:56.878 03:27:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:56.878 03:27:34 -- target/tls.sh@28 -- # bdevperf_pid=250279 00:13:56.878 03:27:34 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:56.878 03:27:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:56.878 03:27:34 -- target/tls.sh@31 -- # waitforlisten 250279 /var/tmp/bdevperf.sock 00:13:56.878 03:27:34 -- common/autotest_common.sh@817 -- # '[' -z 250279 ']' 00:13:56.878 03:27:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:56.878 03:27:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:56.878 03:27:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:56.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:56.878 03:27:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:56.878 03:27:34 -- common/autotest_common.sh@10 -- # set +x 00:13:57.135 [2024-04-19 03:27:34.439976] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
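[editor's note] The /tmp/tmp.2nwlvtPMTf file registered with nvmf_subsystem_add_host and handed to spdk_nvme_perf above carries the PSK in the NVMe TLS interchange format: NVMeTLSkey-1:<hh>:<base64(key || CRC-32)>: with <hh> the hash indicator (01 here, from the digest argument 1). A sketch of the format_key helper traced earlier; the little-endian CRC byte order is an assumption inferred from the helper's observed output:

format_key() {
	local prefix=$1 key=$2 digest=$3
	python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed byte order
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
PYEOF
}
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
# expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: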
00:13:57.135 [2024-04-19 03:27:34.440065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250279 ] 00:13:57.135 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.135 [2024-04-19 03:27:34.497377] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.135 [2024-04-19 03:27:34.604600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.393 03:27:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:57.393 03:27:34 -- common/autotest_common.sh@850 -- # return 0 00:13:57.393 03:27:34 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2nwlvtPMTf 00:13:57.651 [2024-04-19 03:27:34.979970] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:57.651 [2024-04-19 03:27:34.980094] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:57.651 TLSTESTn1 00:13:57.651 03:27:35 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:57.651 Running I/O for 10 seconds... 00:14:09.847 00:14:09.847 Latency(us) 00:14:09.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.847 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:09.847 Verification LBA range: start 0x0 length 0x2000 00:14:09.847 TLSTESTn1 : 10.05 2464.73 9.63 0.00 0.00 51800.58 8641.04 79614.10 00:14:09.847 =================================================================================================================== 00:14:09.847 Total : 2464.73 9.63 0.00 0.00 51800.58 8641.04 79614.10 00:14:09.847 0 00:14:09.847 03:27:45 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:09.847 03:27:45 -- target/tls.sh@45 -- # killprocess 250279 00:14:09.847 03:27:45 -- common/autotest_common.sh@936 -- # '[' -z 250279 ']' 00:14:09.847 03:27:45 -- common/autotest_common.sh@940 -- # kill -0 250279 00:14:09.847 03:27:45 -- common/autotest_common.sh@941 -- # uname 00:14:09.847 03:27:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:09.847 03:27:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 250279 00:14:09.847 03:27:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:09.847 03:27:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:09.847 03:27:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 250279' 00:14:09.847 killing process with pid 250279 00:14:09.847 03:27:45 -- common/autotest_common.sh@955 -- # kill 250279 00:14:09.847 Received shutdown signal, test time was about 10.000000 seconds 00:14:09.847 00:14:09.848 Latency(us) 00:14:09.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.848 =================================================================================================================== 00:14:09.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:09.848 [2024-04-19 03:27:45.287582] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:09.848 03:27:45 -- common/autotest_common.sh@960 -- # wait 250279 00:14:09.848 03:27:45 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tGR07DitC0 00:14:09.848 03:27:45 -- common/autotest_common.sh@638 -- # local es=0 00:14:09.848 03:27:45 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tGR07DitC0 00:14:09.848 03:27:45 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:09.848 03:27:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:09.848 03:27:45 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:09.848 03:27:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:09.848 03:27:45 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tGR07DitC0 00:14:09.848 03:27:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:09.848 03:27:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:09.848 03:27:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:09.848 03:27:45 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tGR07DitC0' 00:14:09.848 03:27:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:09.848 03:27:45 -- target/tls.sh@28 -- # bdevperf_pid=251482 00:14:09.848 03:27:45 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:09.848 03:27:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.848 03:27:45 -- target/tls.sh@31 -- # waitforlisten 251482 /var/tmp/bdevperf.sock 00:14:09.848 03:27:45 -- common/autotest_common.sh@817 -- # '[' -z 251482 ']' 00:14:09.848 03:27:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:09.848 03:27:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:09.848 03:27:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:09.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:09.848 03:27:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:09.848 03:27:45 -- common/autotest_common.sh@10 -- # set +x 00:14:09.848 [2024-04-19 03:27:45.590351] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
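[editor's note] Both the TLSTESTn1 pass above and the expected-failure case starting here funnel through tls.sh's run_bdevperf: start a standalone bdevperf with its own RPC socket, attach a TLS NVMe-oF controller through it, drive the verify workload via bdevperf.py, then tear down. A condensed sketch — a hypothetical simplification with paths shortened and error handling omitted:

run_bdevperf() {
	local subnqn=$1 hostnqn=$2 psk=$3
	local rpc_sock=/var/tmp/bdevperf.sock
	bdevperf -m 0x4 -z -r "$rpc_sock" -q 128 -o 4096 -w verify -t 10 &
	bdevperf_pid=$!
	waitforlisten "$bdevperf_pid" "$rpc_sock"
	rpc.py -s "$rpc_sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
		-a 10.0.0.2 -s 4420 -f ipv4 -n "$subnqn" -q "$hostnqn" --psk "$psk"
	bdevperf.py -t 20 -s "$rpc_sock" perform_tests   # 20s timeout around the 10s verify run
	killprocess "$bdevperf_pid"
}

With the mismatched key /tmp/tmp.tGR07DitC0 the attach step is where this fails, which is exactly what the JSON-RPC error below shows.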
00:14:09.848 [2024-04-19 03:27:45.590448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251482 ] 00:14:09.848 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.848 [2024-04-19 03:27:45.651337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.848 [2024-04-19 03:27:45.762024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.848 03:27:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:09.848 03:27:45 -- common/autotest_common.sh@850 -- # return 0 00:14:09.848 03:27:45 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tGR07DitC0 00:14:09.848 [2024-04-19 03:27:46.130999] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:09.848 [2024-04-19 03:27:46.131131] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:09.848 [2024-04-19 03:27:46.138496] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:09.848 [2024-04-19 03:27:46.138935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124e230 (107): Transport endpoint is not connected 00:14:09.848 [2024-04-19 03:27:46.139924] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124e230 (9): Bad file descriptor 00:14:09.848 [2024-04-19 03:27:46.140923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:09.848 [2024-04-19 03:27:46.140944] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:09.848 [2024-04-19 03:27:46.140961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:09.848 request: 00:14:09.848 { 00:14:09.848 "name": "TLSTEST", 00:14:09.848 "trtype": "tcp", 00:14:09.848 "traddr": "10.0.0.2", 00:14:09.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:09.848 "adrfam": "ipv4", 00:14:09.848 "trsvcid": "4420", 00:14:09.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.848 "psk": "/tmp/tmp.tGR07DitC0", 00:14:09.848 "method": "bdev_nvme_attach_controller", 00:14:09.848 "req_id": 1 00:14:09.848 } 00:14:09.848 Got JSON-RPC error response 00:14:09.848 response: 00:14:09.848 { 00:14:09.848 "code": -32602, 00:14:09.848 "message": "Invalid parameters" 00:14:09.848 } 00:14:09.848 03:27:46 -- target/tls.sh@36 -- # killprocess 251482 00:14:09.848 03:27:46 -- common/autotest_common.sh@936 -- # '[' -z 251482 ']' 00:14:09.848 03:27:46 -- common/autotest_common.sh@940 -- # kill -0 251482 00:14:09.848 03:27:46 -- common/autotest_common.sh@941 -- # uname 00:14:09.848 03:27:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:09.848 03:27:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 251482 00:14:09.848 03:27:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:09.848 03:27:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:09.848 03:27:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 251482' 00:14:09.848 killing process with pid 251482 00:14:09.848 03:27:46 -- common/autotest_common.sh@955 -- # kill 251482 00:14:09.848 Received shutdown signal, test time was about 10.000000 seconds 00:14:09.848 00:14:09.848 Latency(us) 00:14:09.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.848 =================================================================================================================== 00:14:09.848 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:09.848 [2024-04-19 03:27:46.191559] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:09.848 03:27:46 -- common/autotest_common.sh@960 -- # wait 251482 00:14:09.848 03:27:46 -- target/tls.sh@37 -- # return 1 00:14:09.848 03:27:46 -- common/autotest_common.sh@641 -- # es=1 00:14:09.848 03:27:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:09.848 03:27:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:09.848 03:27:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:09.848 03:27:46 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2nwlvtPMTf 00:14:09.848 03:27:46 -- common/autotest_common.sh@638 -- # local es=0 00:14:09.848 03:27:46 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2nwlvtPMTf 00:14:09.848 03:27:46 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:09.848 03:27:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:09.848 03:27:46 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:09.848 03:27:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:09.848 03:27:46 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2nwlvtPMTf 00:14:09.848 03:27:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:09.848 03:27:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:09.848 03:27:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:14:09.848 03:27:46 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2nwlvtPMTf' 00:14:09.848 03:27:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:09.848 03:27:46 -- target/tls.sh@28 -- # bdevperf_pid=251620 00:14:09.848 03:27:46 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:09.848 03:27:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.848 03:27:46 -- target/tls.sh@31 -- # waitforlisten 251620 /var/tmp/bdevperf.sock 00:14:09.849 03:27:46 -- common/autotest_common.sh@817 -- # '[' -z 251620 ']' 00:14:09.849 03:27:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:09.849 03:27:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:09.849 03:27:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:09.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:09.849 03:27:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:09.849 03:27:46 -- common/autotest_common.sh@10 -- # set +x 00:14:09.849 [2024-04-19 03:27:46.497948] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:09.849 [2024-04-19 03:27:46.498035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251620 ] 00:14:09.849 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.849 [2024-04-19 03:27:46.556288] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.849 [2024-04-19 03:27:46.659922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.849 03:27:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:09.849 03:27:46 -- common/autotest_common.sh@850 -- # return 0 00:14:09.849 03:27:46 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.2nwlvtPMTf 00:14:09.849 [2024-04-19 03:27:46.990788] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:09.849 [2024-04-19 03:27:46.990914] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:09.849 [2024-04-19 03:27:47.000535] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:09.849 [2024-04-19 03:27:47.000566] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:09.849 [2024-04-19 03:27:47.000603] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:09.849 [2024-04-19 03:27:47.000925] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb70230 (107): Transport endpoint is not connected 00:14:09.849 [2024-04-19 03:27:47.001915] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb70230 (9): Bad file descriptor 00:14:09.849 [2024-04-19 03:27:47.002914] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:09.849 [2024-04-19 03:27:47.002935] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:09.849 [2024-04-19 03:27:47.002947] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:09.849 request: 00:14:09.849 { 00:14:09.849 "name": "TLSTEST", 00:14:09.849 "trtype": "tcp", 00:14:09.849 "traddr": "10.0.0.2", 00:14:09.849 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:09.849 "adrfam": "ipv4", 00:14:09.849 "trsvcid": "4420", 00:14:09.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.849 "psk": "/tmp/tmp.2nwlvtPMTf", 00:14:09.849 "method": "bdev_nvme_attach_controller", 00:14:09.849 "req_id": 1 00:14:09.849 } 00:14:09.849 Got JSON-RPC error response 00:14:09.849 response: 00:14:09.849 { 00:14:09.849 "code": -32602, 00:14:09.849 "message": "Invalid parameters" 00:14:09.849 } 00:14:09.849 03:27:47 -- target/tls.sh@36 -- # killprocess 251620 00:14:09.849 03:27:47 -- common/autotest_common.sh@936 -- # '[' -z 251620 ']' 00:14:09.849 03:27:47 -- common/autotest_common.sh@940 -- # kill -0 251620 00:14:09.849 03:27:47 -- common/autotest_common.sh@941 -- # uname 00:14:09.849 03:27:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:09.849 03:27:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 251620 00:14:09.849 03:27:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:09.849 03:27:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:09.849 03:27:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 251620' 00:14:09.849 killing process with pid 251620 00:14:09.849 03:27:47 -- common/autotest_common.sh@955 -- # kill 251620 00:14:09.849 Received shutdown signal, test time was about 10.000000 seconds 00:14:09.849 00:14:09.849 Latency(us) 00:14:09.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.849 =================================================================================================================== 00:14:09.849 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:09.849 [2024-04-19 03:27:47.054908] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:09.849 03:27:47 -- common/autotest_common.sh@960 -- # wait 251620 00:14:09.849 03:27:47 -- target/tls.sh@37 -- # return 1 00:14:09.849 03:27:47 -- common/autotest_common.sh@641 -- # es=1 00:14:09.849 03:27:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:09.849 03:27:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:09.849 03:27:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:09.849 03:27:47 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2nwlvtPMTf 00:14:09.849 03:27:47 -- common/autotest_common.sh@638 -- # local es=0 00:14:09.849 03:27:47 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2nwlvtPMTf 00:14:09.849 03:27:47 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:09.849 03:27:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:09.849 03:27:47 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:09.849 03:27:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:09.849 03:27:47 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2nwlvtPMTf 00:14:09.849 03:27:47 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:09.849 03:27:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:09.849 03:27:47 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:09.849 03:27:47 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2nwlvtPMTf' 00:14:09.849 03:27:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:09.849 03:27:47 -- target/tls.sh@28 -- # bdevperf_pid=251754 00:14:09.849 03:27:47 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:09.849 03:27:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.849 03:27:47 -- target/tls.sh@31 -- # waitforlisten 251754 /var/tmp/bdevperf.sock 00:14:09.849 03:27:47 -- common/autotest_common.sh@817 -- # '[' -z 251754 ']' 00:14:09.849 03:27:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:09.849 03:27:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:09.849 03:27:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:09.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:09.849 03:27:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:09.849 03:27:47 -- common/autotest_common.sh@10 -- # set +x 00:14:09.849 [2024-04-19 03:27:47.361971] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:14:09.849 [2024-04-19 03:27:47.362058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251754 ] 00:14:09.849 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.108 [2024-04-19 03:27:47.421317] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.108 [2024-04-19 03:27:47.528087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.108 03:27:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:10.108 03:27:47 -- common/autotest_common.sh@850 -- # return 0 00:14:10.108 03:27:47 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2nwlvtPMTf 00:14:10.365 [2024-04-19 03:27:47.873767] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:10.365 [2024-04-19 03:27:47.873903] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:10.365 [2024-04-19 03:27:47.882812] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:10.365 [2024-04-19 03:27:47.882842] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:10.365 [2024-04-19 03:27:47.882892] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:10.365 [2024-04-19 03:27:47.883871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133a230 (107): Transport endpoint is not connected 00:14:10.365 [2024-04-19 03:27:47.884863] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133a230 (9): Bad file descriptor 00:14:10.365 [2024-04-19 03:27:47.885862] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:10.365 [2024-04-19 03:27:47.885892] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:10.365 [2024-04-19 03:27:47.885904] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
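Note on the lookup failure: both wrong-NQN cases die the same way, because the target searches its PSK table by an identity string built from the TLS identity prefix plus the host and subsystem NQNs, exactly as echoed in the "Could not find PSK for identity" lines above. A trivial illustration with the values from this run; a key registered for a different host/subsystem pair can never match:

  # the identity the target searched for and could not find
  hostnqn="nqn.2016-06.io.spdk:host1"
  subnqn="nqn.2016-06.io.spdk:cnode2"
  identity="NVMe0R01 ${hostnqn} ${subnqn}"
  echo "$identity"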
00:14:10.365 request: 00:14:10.365 { 00:14:10.365 "name": "TLSTEST", 00:14:10.365 "trtype": "tcp", 00:14:10.365 "traddr": "10.0.0.2", 00:14:10.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:10.365 "adrfam": "ipv4", 00:14:10.365 "trsvcid": "4420", 00:14:10.365 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:10.365 "psk": "/tmp/tmp.2nwlvtPMTf", 00:14:10.365 "method": "bdev_nvme_attach_controller", 00:14:10.365 "req_id": 1 00:14:10.365 } 00:14:10.365 Got JSON-RPC error response 00:14:10.365 response: 00:14:10.365 { 00:14:10.365 "code": -32602, 00:14:10.365 "message": "Invalid parameters" 00:14:10.365 } 00:14:10.365 03:27:47 -- target/tls.sh@36 -- # killprocess 251754 00:14:10.365 03:27:47 -- common/autotest_common.sh@936 -- # '[' -z 251754 ']' 00:14:10.366 03:27:47 -- common/autotest_common.sh@940 -- # kill -0 251754 00:14:10.366 03:27:47 -- common/autotest_common.sh@941 -- # uname 00:14:10.366 03:27:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:10.366 03:27:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 251754 00:14:10.623 03:27:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:10.623 03:27:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:10.623 03:27:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 251754' 00:14:10.623 killing process with pid 251754 00:14:10.623 03:27:47 -- common/autotest_common.sh@955 -- # kill 251754 00:14:10.623 Received shutdown signal, test time was about 10.000000 seconds 00:14:10.623 00:14:10.623 Latency(us) 00:14:10.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.623 =================================================================================================================== 00:14:10.623 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:10.623 [2024-04-19 03:27:47.935238] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:10.623 03:27:47 -- common/autotest_common.sh@960 -- # wait 251754 00:14:10.882 03:27:48 -- target/tls.sh@37 -- # return 1 00:14:10.882 03:27:48 -- common/autotest_common.sh@641 -- # es=1 00:14:10.882 03:27:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:10.882 03:27:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:10.882 03:27:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:10.882 03:27:48 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:10.882 03:27:48 -- common/autotest_common.sh@638 -- # local es=0 00:14:10.882 03:27:48 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:10.882 03:27:48 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:10.882 03:27:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:10.882 03:27:48 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:10.882 03:27:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:10.882 03:27:48 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:10.882 03:27:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:10.882 03:27:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:10.882 03:27:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:10.882 03:27:48 -- target/tls.sh@23 -- # psk= 
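Note on the empty key: this fourth negative case passes '' instead of a key path. Judging by the tls.sh@23 traces (psk='--psk /tmp/...' in the earlier cases, bare psk= here), run_bdevperf appears to add the --psk flag only when a key was supplied; a sketch of that inferred logic, not the literal tls.sh source:

  # inferred from the psk= traces above; $3 is run_bdevperf's key argument
  key=$3
  psk=""
  [[ -n $key ]] && psk="--psk $key"   # omit the flag entirely when no key is given

With the flag omitted, the bdev_nvme_attach_controller call that follows attempts a plain TCP connection.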
00:14:10.882 03:27:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.882 03:27:48 -- target/tls.sh@28 -- # bdevperf_pid=251890 00:14:10.882 03:27:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:10.882 03:27:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:10.882 03:27:48 -- target/tls.sh@31 -- # waitforlisten 251890 /var/tmp/bdevperf.sock 00:14:10.882 03:27:48 -- common/autotest_common.sh@817 -- # '[' -z 251890 ']' 00:14:10.882 03:27:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.882 03:27:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:10.882 03:27:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:10.882 03:27:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:10.882 03:27:48 -- common/autotest_common.sh@10 -- # set +x 00:14:10.882 [2024-04-19 03:27:48.239205] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:10.882 [2024-04-19 03:27:48.239293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251890 ] 00:14:10.882 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.882 [2024-04-19 03:27:48.296145] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.882 [2024-04-19 03:27:48.400547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.141 03:27:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:11.141 03:27:48 -- common/autotest_common.sh@850 -- # return 0 00:14:11.141 03:27:48 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:11.407 [2024-04-19 03:27:48.755166] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:11.407 [2024-04-19 03:27:48.756892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe83bb0 (9): Bad file descriptor 00:14:11.407 [2024-04-19 03:27:48.757886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:11.407 [2024-04-19 03:27:48.757906] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:11.407 [2024-04-19 03:27:48.757919] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
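Note on why the plain attach fails: without --psk the initiator never starts a TLS handshake, while the target's listener was added with -k (secure channel), so the connection is dropped before controller init and the same bad-fd/ctrlr-error sequence follows. For reference, the listener-side setup, condensed from the setup_nvmf_tgt steps traced later in this log (rpc.py path shortened for readability):

  # condensed target-side setup; -k marks the TCP listener as TLS-only
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k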
00:14:11.407 request: 00:14:11.407 { 00:14:11.407 "name": "TLSTEST", 00:14:11.407 "trtype": "tcp", 00:14:11.407 "traddr": "10.0.0.2", 00:14:11.407 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.407 "adrfam": "ipv4", 00:14:11.407 "trsvcid": "4420", 00:14:11.407 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.407 "method": "bdev_nvme_attach_controller", 00:14:11.407 "req_id": 1 00:14:11.407 } 00:14:11.407 Got JSON-RPC error response 00:14:11.407 response: 00:14:11.407 { 00:14:11.407 "code": -32602, 00:14:11.407 "message": "Invalid parameters" 00:14:11.407 } 00:14:11.407 03:27:48 -- target/tls.sh@36 -- # killprocess 251890 00:14:11.407 03:27:48 -- common/autotest_common.sh@936 -- # '[' -z 251890 ']' 00:14:11.407 03:27:48 -- common/autotest_common.sh@940 -- # kill -0 251890 00:14:11.407 03:27:48 -- common/autotest_common.sh@941 -- # uname 00:14:11.407 03:27:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:11.407 03:27:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 251890 00:14:11.407 03:27:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:11.407 03:27:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:11.407 03:27:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 251890' 00:14:11.407 killing process with pid 251890 00:14:11.407 03:27:48 -- common/autotest_common.sh@955 -- # kill 251890 00:14:11.407 Received shutdown signal, test time was about 10.000000 seconds 00:14:11.407 00:14:11.407 Latency(us) 00:14:11.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.407 =================================================================================================================== 00:14:11.407 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:11.407 03:27:48 -- common/autotest_common.sh@960 -- # wait 251890 00:14:11.669 03:27:49 -- target/tls.sh@37 -- # return 1 00:14:11.669 03:27:49 -- common/autotest_common.sh@641 -- # es=1 00:14:11.669 03:27:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:11.669 03:27:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:11.669 03:27:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:11.669 03:27:49 -- target/tls.sh@158 -- # killprocess 248379 00:14:11.669 03:27:49 -- common/autotest_common.sh@936 -- # '[' -z 248379 ']' 00:14:11.669 03:27:49 -- common/autotest_common.sh@940 -- # kill -0 248379 00:14:11.669 03:27:49 -- common/autotest_common.sh@941 -- # uname 00:14:11.669 03:27:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:11.669 03:27:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 248379 00:14:11.669 03:27:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:11.669 03:27:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:11.669 03:27:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 248379' 00:14:11.669 killing process with pid 248379 00:14:11.669 03:27:49 -- common/autotest_common.sh@955 -- # kill 248379 00:14:11.669 [2024-04-19 03:27:49.070266] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:11.669 03:27:49 -- common/autotest_common.sh@960 -- # wait 248379 00:14:11.928 03:27:49 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:11.928 03:27:49 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:14:11.928 03:27:49 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:11.928 03:27:49 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:11.928 03:27:49 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:11.928 03:27:49 -- nvmf/common.sh@693 -- # digest=2 00:14:11.928 03:27:49 -- nvmf/common.sh@694 -- # python - 00:14:11.928 03:27:49 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:11.928 03:27:49 -- target/tls.sh@160 -- # mktemp 00:14:11.928 03:27:49 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.rOBhabvfA8 00:14:11.928 03:27:49 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:11.928 03:27:49 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.rOBhabvfA8 00:14:11.928 03:27:49 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:11.928 03:27:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:11.928 03:27:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:11.928 03:27:49 -- common/autotest_common.sh@10 -- # set +x 00:14:11.928 03:27:49 -- nvmf/common.sh@470 -- # nvmfpid=252048 00:14:11.928 03:27:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:11.928 03:27:49 -- nvmf/common.sh@471 -- # waitforlisten 252048 00:14:11.928 03:27:49 -- common/autotest_common.sh@817 -- # '[' -z 252048 ']' 00:14:11.929 03:27:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.929 03:27:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:11.929 03:27:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.929 03:27:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:11.929 03:27:49 -- common/autotest_common.sh@10 -- # set +x 00:14:11.929 [2024-04-19 03:27:49.448995] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:11.929 [2024-04-19 03:27:49.449093] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.929 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.187 [2024-04-19 03:27:49.517549] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.187 [2024-04-19 03:27:49.631377] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.187 [2024-04-19 03:27:49.631469] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.187 [2024-04-19 03:27:49.631485] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.187 [2024-04-19 03:27:49.631508] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.187 [2024-04-19 03:27:49.631520] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
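Note on the key assembled above (tls.sh@159): it follows the NVMe TLS PSK interchange layout, prefix, two-digit hash designator, then base64 of the key bytes with a 4-byte CRC32 appended. The 02 designator corresponds to the 48-byte (SHA-384) variant, matching digest=2 in the trace. A standalone re-creation of what the traced "python -" step appears to compute; the little-endian CRC byte order is an assumption inferred from the checksum tail of the output, not taken from the shipped nvmf/common.sh:

  # hedged re-creation of format_key's python step
  python3 - <<'EOF'
  import base64, zlib
  key = b"00112233445566778899aabbccddeeff0011223344556677"  # used as ASCII bytes
  crc = zlib.crc32(key).to_bytes(4, "little")                 # assumed byte order
  print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())
  EOF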
00:14:12.187 [2024-04-19 03:27:49.631559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.123 03:27:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:13.123 03:27:50 -- common/autotest_common.sh@850 -- # return 0 00:14:13.123 03:27:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:13.123 03:27:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:13.123 03:27:50 -- common/autotest_common.sh@10 -- # set +x 00:14:13.123 03:27:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.123 03:27:50 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.rOBhabvfA8 00:14:13.123 03:27:50 -- target/tls.sh@49 -- # local key=/tmp/tmp.rOBhabvfA8 00:14:13.123 03:27:50 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:13.123 [2024-04-19 03:27:50.663749] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.381 03:27:50 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:13.639 03:27:50 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:13.639 [2024-04-19 03:27:51.197251] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:13.639 [2024-04-19 03:27:51.197522] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.905 03:27:51 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:13.905 malloc0 00:14:14.191 03:27:51 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:14.191 03:27:51 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rOBhabvfA8 00:14:14.449 [2024-04-19 03:27:51.943860] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:14.449 03:27:51 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rOBhabvfA8 00:14:14.449 03:27:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:14.449 03:27:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:14.449 03:27:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:14.449 03:27:51 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rOBhabvfA8' 00:14:14.449 03:27:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:14.449 03:27:51 -- target/tls.sh@28 -- # bdevperf_pid=252338 00:14:14.449 03:27:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.449 03:27:51 -- target/tls.sh@31 -- # waitforlisten 252338 /var/tmp/bdevperf.sock 00:14:14.449 03:27:51 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:14.449 03:27:51 -- common/autotest_common.sh@817 -- # '[' -z 252338 ']' 00:14:14.449 03:27:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.449 03:27:51 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:14:14.449 03:27:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.449 03:27:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:14.449 03:27:51 -- common/autotest_common.sh@10 -- # set +x 00:14:14.449 [2024-04-19 03:27:52.003812] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:14.449 [2024-04-19 03:27:52.003901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252338 ] 00:14:14.707 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.707 [2024-04-19 03:27:52.063957] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.707 [2024-04-19 03:27:52.172557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.964 03:27:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:14.964 03:27:52 -- common/autotest_common.sh@850 -- # return 0 00:14:14.964 03:27:52 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rOBhabvfA8 00:14:15.223 [2024-04-19 03:27:52.531340] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:15.223 [2024-04-19 03:27:52.531474] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:15.223 TLSTESTn1 00:14:15.223 03:27:52 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:15.223 Running I/O for 10 seconds... 
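Note on the run that starts here: the 10-second verify workload produces the throughput table printed below, and its MiB/s column can be cross-checked against the IOPS column and the 4096-byte I/O size these bdevperf invocations use (-o 4096):

  # sanity check: MiB/s = IOPS * io_size / 2^20
  echo "2435.76 * 4096 / 1048576" | bc -l   # -> 9.514..., matching the 9.51 below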
00:14:27.431 00:14:27.431 Latency(us) 00:14:27.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.431 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:27.431 Verification LBA range: start 0x0 length 0x2000 00:14:27.431 TLSTESTn1 : 10.05 2435.76 9.51 0.00 0.00 52415.56 8835.22 74953.77 00:14:27.431 =================================================================================================================== 00:14:27.431 Total : 2435.76 9.51 0.00 0.00 52415.56 8835.22 74953.77 00:14:27.431 0 00:14:27.431 03:28:02 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:27.431 03:28:02 -- target/tls.sh@45 -- # killprocess 252338 00:14:27.431 03:28:02 -- common/autotest_common.sh@936 -- # '[' -z 252338 ']' 00:14:27.431 03:28:02 -- common/autotest_common.sh@940 -- # kill -0 252338 00:14:27.431 03:28:02 -- common/autotest_common.sh@941 -- # uname 00:14:27.431 03:28:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:27.431 03:28:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 252338 00:14:27.431 03:28:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:27.431 03:28:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:27.431 03:28:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 252338' 00:14:27.431 killing process with pid 252338 00:14:27.431 03:28:02 -- common/autotest_common.sh@955 -- # kill 252338 00:14:27.431 Received shutdown signal, test time was about 10.000000 seconds 00:14:27.431 00:14:27.431 Latency(us) 00:14:27.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.431 =================================================================================================================== 00:14:27.431 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:27.431 [2024-04-19 03:28:02.856080] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:27.431 03:28:02 -- common/autotest_common.sh@960 -- # wait 252338 00:14:27.431 03:28:03 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.rOBhabvfA8 00:14:27.431 03:28:03 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rOBhabvfA8 00:14:27.431 03:28:03 -- common/autotest_common.sh@638 -- # local es=0 00:14:27.431 03:28:03 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rOBhabvfA8 00:14:27.431 03:28:03 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:27.431 03:28:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:27.431 03:28:03 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:27.431 03:28:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:27.431 03:28:03 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rOBhabvfA8 00:14:27.431 03:28:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:27.431 03:28:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:27.431 03:28:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:27.431 03:28:03 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rOBhabvfA8' 00:14:27.431 03:28:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:27.431 03:28:03 -- target/tls.sh@28 -- # bdevperf_pid=253654 
00:14:27.431 03:28:03 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:27.431 03:28:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:27.431 03:28:03 -- target/tls.sh@31 -- # waitforlisten 253654 /var/tmp/bdevperf.sock 00:14:27.431 03:28:03 -- common/autotest_common.sh@817 -- # '[' -z 253654 ']' 00:14:27.431 03:28:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:27.431 03:28:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:27.431 03:28:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:27.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:27.431 03:28:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:27.431 03:28:03 -- common/autotest_common.sh@10 -- # set +x 00:14:27.431 [2024-04-19 03:28:03.172863] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:27.431 [2024-04-19 03:28:03.172952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253654 ] 00:14:27.431 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.431 [2024-04-19 03:28:03.230635] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.431 [2024-04-19 03:28:03.333136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.431 03:28:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:27.431 03:28:03 -- common/autotest_common.sh@850 -- # return 0 00:14:27.431 03:28:03 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rOBhabvfA8 00:14:27.431 [2024-04-19 03:28:03.695405] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:27.431 [2024-04-19 03:28:03.695491] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:27.431 [2024-04-19 03:28:03.695506] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.rOBhabvfA8 00:14:27.431 request: 00:14:27.431 { 00:14:27.431 "name": "TLSTEST", 00:14:27.431 "trtype": "tcp", 00:14:27.431 "traddr": "10.0.0.2", 00:14:27.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:27.431 "adrfam": "ipv4", 00:14:27.431 "trsvcid": "4420", 00:14:27.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.431 "psk": "/tmp/tmp.rOBhabvfA8", 00:14:27.431 "method": "bdev_nvme_attach_controller", 00:14:27.431 "req_id": 1 00:14:27.431 } 00:14:27.431 Got JSON-RPC error response 00:14:27.431 response: 00:14:27.431 { 00:14:27.431 "code": -1, 00:14:27.431 "message": "Operation not permitted" 00:14:27.431 } 00:14:27.431 03:28:03 -- target/tls.sh@36 -- # killprocess 253654 00:14:27.431 03:28:03 -- common/autotest_common.sh@936 -- # '[' -z 253654 ']' 00:14:27.431 03:28:03 -- common/autotest_common.sh@940 -- # kill -0 253654 00:14:27.431 03:28:03 -- common/autotest_common.sh@941 -- # uname 00:14:27.431 03:28:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:27.431 03:28:03 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 253654 00:14:27.431 03:28:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:27.432 03:28:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:27.432 03:28:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 253654' 00:14:27.432 killing process with pid 253654 00:14:27.432 03:28:03 -- common/autotest_common.sh@955 -- # kill 253654 00:14:27.432 Received shutdown signal, test time was about 10.000000 seconds 00:14:27.432 00:14:27.432 Latency(us) 00:14:27.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.432 =================================================================================================================== 00:14:27.432 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:27.432 03:28:03 -- common/autotest_common.sh@960 -- # wait 253654 00:14:27.432 03:28:03 -- target/tls.sh@37 -- # return 1 00:14:27.432 03:28:03 -- common/autotest_common.sh@641 -- # es=1 00:14:27.432 03:28:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:27.432 03:28:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:27.432 03:28:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:27.432 03:28:03 -- target/tls.sh@174 -- # killprocess 252048 00:14:27.432 03:28:03 -- common/autotest_common.sh@936 -- # '[' -z 252048 ']' 00:14:27.432 03:28:03 -- common/autotest_common.sh@940 -- # kill -0 252048 00:14:27.432 03:28:03 -- common/autotest_common.sh@941 -- # uname 00:14:27.432 03:28:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:27.432 03:28:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 252048 00:14:27.432 03:28:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:27.432 03:28:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:27.432 03:28:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 252048' 00:14:27.432 killing process with pid 252048 00:14:27.432 03:28:04 -- common/autotest_common.sh@955 -- # kill 252048 00:14:27.432 [2024-04-19 03:28:04.030511] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:27.432 03:28:04 -- common/autotest_common.sh@960 -- # wait 252048 00:14:27.432 03:28:04 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:27.432 03:28:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:27.432 03:28:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:27.432 03:28:04 -- common/autotest_common.sh@10 -- # set +x 00:14:27.432 03:28:04 -- nvmf/common.sh@470 -- # nvmfpid=253801 00:14:27.432 03:28:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:27.432 03:28:04 -- nvmf/common.sh@471 -- # waitforlisten 253801 00:14:27.432 03:28:04 -- common/autotest_common.sh@817 -- # '[' -z 253801 ']' 00:14:27.432 03:28:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.432 03:28:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:27.432 03:28:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
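Note on the wait that follows: nvmfappstart launches the next nvmf_tgt instance and blocks in waitforlisten until the RPC socket answers. A simplified sketch of that loop; the real helper in autotest_common.sh adds timeouts and error reporting beyond what is shown, the 100-iteration bound comes from the max_retries=100 trace above, and the rpc.py path is abbreviated:

  # simplified sketch of waitforlisten(pid, rpc_addr)
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1      # target died while waiting
          if [[ -S $rpc_addr ]] && rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
              return 0                                 # socket is up and answering RPCs
          fi
          sleep 0.1
      done
      return 1
  }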
00:14:27.432 03:28:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:27.432 03:28:04 -- common/autotest_common.sh@10 -- # set +x 00:14:27.432 [2024-04-19 03:28:04.386641] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:27.432 [2024-04-19 03:28:04.386734] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.432 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.432 [2024-04-19 03:28:04.455670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.432 [2024-04-19 03:28:04.568355] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.432 [2024-04-19 03:28:04.568441] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.432 [2024-04-19 03:28:04.568473] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.432 [2024-04-19 03:28:04.568485] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.432 [2024-04-19 03:28:04.568496] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.432 [2024-04-19 03:28:04.568529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.997 03:28:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:27.997 03:28:05 -- common/autotest_common.sh@850 -- # return 0 00:14:27.997 03:28:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:27.997 03:28:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:27.997 03:28:05 -- common/autotest_common.sh@10 -- # set +x 00:14:27.997 03:28:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.997 03:28:05 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.rOBhabvfA8 00:14:27.997 03:28:05 -- common/autotest_common.sh@638 -- # local es=0 00:14:27.997 03:28:05 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.rOBhabvfA8 00:14:27.997 03:28:05 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:14:27.997 03:28:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:27.997 03:28:05 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:14:27.997 03:28:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:27.997 03:28:05 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.rOBhabvfA8 00:14:27.997 03:28:05 -- target/tls.sh@49 -- # local key=/tmp/tmp.rOBhabvfA8 00:14:27.997 03:28:05 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:28.255 [2024-04-19 03:28:05.604700] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.255 03:28:05 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:28.512 03:28:05 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:28.770 [2024-04-19 03:28:06.138148] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:28.770 [2024-04-19 03:28:06.138438] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.770 03:28:06 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:29.028 malloc0 00:14:29.028 03:28:06 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:29.285 03:28:06 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rOBhabvfA8 00:14:29.543 [2024-04-19 03:28:06.928236] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:29.543 [2024-04-19 03:28:06.928277] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:29.543 [2024-04-19 03:28:06.928325] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:14:29.543 request: 00:14:29.543 { 00:14:29.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.543 "host": "nqn.2016-06.io.spdk:host1", 00:14:29.543 "psk": "/tmp/tmp.rOBhabvfA8", 00:14:29.543 "method": "nvmf_subsystem_add_host", 00:14:29.543 "req_id": 1 00:14:29.543 } 00:14:29.543 Got JSON-RPC error response 00:14:29.543 response: 00:14:29.543 { 00:14:29.543 "code": -32603, 00:14:29.543 "message": "Internal error" 00:14:29.543 } 00:14:29.543 03:28:06 -- common/autotest_common.sh@641 -- # es=1 00:14:29.543 03:28:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:29.543 03:28:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:29.543 03:28:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:29.543 03:28:06 -- target/tls.sh@180 -- # killprocess 253801 00:14:29.543 03:28:06 -- common/autotest_common.sh@936 -- # '[' -z 253801 ']' 00:14:29.543 03:28:06 -- common/autotest_common.sh@940 -- # kill -0 253801 00:14:29.543 03:28:06 -- common/autotest_common.sh@941 -- # uname 00:14:29.543 03:28:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:29.543 03:28:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 253801 00:14:29.543 03:28:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:29.543 03:28:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:29.543 03:28:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 253801' 00:14:29.543 killing process with pid 253801 00:14:29.543 03:28:06 -- common/autotest_common.sh@955 -- # kill 253801 00:14:29.543 03:28:06 -- common/autotest_common.sh@960 -- # wait 253801 00:14:29.801 03:28:07 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.rOBhabvfA8 00:14:29.801 03:28:07 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:29.801 03:28:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:29.801 03:28:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:29.801 03:28:07 -- common/autotest_common.sh@10 -- # set +x 00:14:29.801 03:28:07 -- nvmf/common.sh@470 -- # nvmfpid=254221 00:14:29.801 03:28:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:29.801 03:28:07 -- nvmf/common.sh@471 -- # waitforlisten 254221 00:14:29.801 03:28:07 -- common/autotest_common.sh@817 -- # '[' -z 254221 ']' 00:14:29.801 03:28:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.801 03:28:07 -- common/autotest_common.sh@822 -- # 
local max_retries=100 00:14:29.801 03:28:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.801 03:28:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:29.801 03:28:07 -- common/autotest_common.sh@10 -- # set +x 00:14:29.801 [2024-04-19 03:28:07.314189] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:29.801 [2024-04-19 03:28:07.314283] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.801 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.059 [2024-04-19 03:28:07.388185] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.059 [2024-04-19 03:28:07.506170] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.059 [2024-04-19 03:28:07.506242] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.059 [2024-04-19 03:28:07.506258] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.059 [2024-04-19 03:28:07.506272] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.059 [2024-04-19 03:28:07.506284] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.059 [2024-04-19 03:28:07.506327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.316 03:28:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:30.316 03:28:07 -- common/autotest_common.sh@850 -- # return 0 00:14:30.316 03:28:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:30.316 03:28:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:30.317 03:28:07 -- common/autotest_common.sh@10 -- # set +x 00:14:30.317 03:28:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.317 03:28:07 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.rOBhabvfA8 00:14:30.317 03:28:07 -- target/tls.sh@49 -- # local key=/tmp/tmp.rOBhabvfA8 00:14:30.317 03:28:07 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:30.574 [2024-04-19 03:28:07.921291] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.574 03:28:07 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:30.831 03:28:08 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:31.089 [2024-04-19 03:28:08.418633] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:31.089 [2024-04-19 03:28:08.418880] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.089 03:28:08 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:31.347 malloc0 00:14:31.347 03:28:08 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:31.604 03:28:08 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rOBhabvfA8 00:14:31.862 [2024-04-19 03:28:09.184614] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:31.862 03:28:09 -- target/tls.sh@188 -- # bdevperf_pid=254388 00:14:31.862 03:28:09 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:31.862 03:28:09 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:31.862 03:28:09 -- target/tls.sh@191 -- # waitforlisten 254388 /var/tmp/bdevperf.sock 00:14:31.862 03:28:09 -- common/autotest_common.sh@817 -- # '[' -z 254388 ']' 00:14:31.862 03:28:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.862 03:28:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:31.862 03:28:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.862 03:28:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:31.862 03:28:09 -- common/autotest_common.sh@10 -- # set +x 00:14:31.862 [2024-04-19 03:28:09.245340] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:31.862 [2024-04-19 03:28:09.245441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254388 ] 00:14:31.862 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.862 [2024-04-19 03:28:09.304554] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.862 [2024-04-19 03:28:09.410907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.121 03:28:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:32.121 03:28:09 -- common/autotest_common.sh@850 -- # return 0 00:14:32.121 03:28:09 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rOBhabvfA8 00:14:32.379 [2024-04-19 03:28:09.756152] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:32.379 [2024-04-19 03:28:09.756259] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:32.379 TLSTESTn1 00:14:32.379 03:28:09 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:14:32.638 03:28:10 -- target/tls.sh@196 -- # tgtconf='{ 00:14:32.638 "subsystems": [ 00:14:32.638 { 00:14:32.638 "subsystem": "keyring", 00:14:32.638 "config": [] 00:14:32.638 }, 00:14:32.638 { 00:14:32.638 "subsystem": "iobuf", 00:14:32.638 "config": [ 00:14:32.638 { 00:14:32.638 "method": "iobuf_set_options", 00:14:32.638 "params": { 00:14:32.638 
"small_pool_count": 8192, 00:14:32.638 "large_pool_count": 1024, 00:14:32.638 "small_bufsize": 8192, 00:14:32.638 "large_bufsize": 135168 00:14:32.638 } 00:14:32.638 } 00:14:32.638 ] 00:14:32.638 }, 00:14:32.638 { 00:14:32.638 "subsystem": "sock", 00:14:32.638 "config": [ 00:14:32.638 { 00:14:32.638 "method": "sock_impl_set_options", 00:14:32.638 "params": { 00:14:32.638 "impl_name": "posix", 00:14:32.638 "recv_buf_size": 2097152, 00:14:32.638 "send_buf_size": 2097152, 00:14:32.638 "enable_recv_pipe": true, 00:14:32.638 "enable_quickack": false, 00:14:32.638 "enable_placement_id": 0, 00:14:32.638 "enable_zerocopy_send_server": true, 00:14:32.638 "enable_zerocopy_send_client": false, 00:14:32.638 "zerocopy_threshold": 0, 00:14:32.638 "tls_version": 0, 00:14:32.638 "enable_ktls": false 00:14:32.638 } 00:14:32.638 }, 00:14:32.638 { 00:14:32.638 "method": "sock_impl_set_options", 00:14:32.638 "params": { 00:14:32.638 "impl_name": "ssl", 00:14:32.638 "recv_buf_size": 4096, 00:14:32.638 "send_buf_size": 4096, 00:14:32.638 "enable_recv_pipe": true, 00:14:32.638 "enable_quickack": false, 00:14:32.638 "enable_placement_id": 0, 00:14:32.638 "enable_zerocopy_send_server": true, 00:14:32.638 "enable_zerocopy_send_client": false, 00:14:32.638 "zerocopy_threshold": 0, 00:14:32.638 "tls_version": 0, 00:14:32.638 "enable_ktls": false 00:14:32.638 } 00:14:32.638 } 00:14:32.638 ] 00:14:32.638 }, 00:14:32.638 { 00:14:32.638 "subsystem": "vmd", 00:14:32.638 "config": [] 00:14:32.638 }, 00:14:32.638 { 00:14:32.638 "subsystem": "accel", 00:14:32.638 "config": [ 00:14:32.638 { 00:14:32.638 "method": "accel_set_options", 00:14:32.638 "params": { 00:14:32.638 "small_cache_size": 128, 00:14:32.638 "large_cache_size": 16, 00:14:32.638 "task_count": 2048, 00:14:32.638 "sequence_count": 2048, 00:14:32.638 "buf_count": 2048 00:14:32.638 } 00:14:32.638 } 00:14:32.638 ] 00:14:32.638 }, 00:14:32.638 { 00:14:32.638 "subsystem": "bdev", 00:14:32.638 "config": [ 00:14:32.638 { 00:14:32.638 "method": "bdev_set_options", 00:14:32.638 "params": { 00:14:32.638 "bdev_io_pool_size": 65535, 00:14:32.638 "bdev_io_cache_size": 256, 00:14:32.638 "bdev_auto_examine": true, 00:14:32.638 "iobuf_small_cache_size": 128, 00:14:32.638 "iobuf_large_cache_size": 16 00:14:32.638 } 00:14:32.638 }, 00:14:32.638 { 00:14:32.638 "method": "bdev_raid_set_options", 00:14:32.638 "params": { 00:14:32.638 "process_window_size_kb": 1024 00:14:32.638 } 00:14:32.638 }, 00:14:32.638 { 00:14:32.638 "method": "bdev_iscsi_set_options", 00:14:32.638 "params": { 00:14:32.638 "timeout_sec": 30 00:14:32.638 } 00:14:32.638 }, 00:14:32.638 { 00:14:32.638 "method": "bdev_nvme_set_options", 00:14:32.638 "params": { 00:14:32.638 "action_on_timeout": "none", 00:14:32.638 "timeout_us": 0, 00:14:32.638 "timeout_admin_us": 0, 00:14:32.638 "keep_alive_timeout_ms": 10000, 00:14:32.639 "arbitration_burst": 0, 00:14:32.639 "low_priority_weight": 0, 00:14:32.639 "medium_priority_weight": 0, 00:14:32.639 "high_priority_weight": 0, 00:14:32.639 "nvme_adminq_poll_period_us": 10000, 00:14:32.639 "nvme_ioq_poll_period_us": 0, 00:14:32.639 "io_queue_requests": 0, 00:14:32.639 "delay_cmd_submit": true, 00:14:32.639 "transport_retry_count": 4, 00:14:32.639 "bdev_retry_count": 3, 00:14:32.639 "transport_ack_timeout": 0, 00:14:32.639 "ctrlr_loss_timeout_sec": 0, 00:14:32.639 "reconnect_delay_sec": 0, 00:14:32.639 "fast_io_fail_timeout_sec": 0, 00:14:32.639 "disable_auto_failback": false, 00:14:32.639 "generate_uuids": false, 00:14:32.639 "transport_tos": 0, 00:14:32.639 "nvme_error_stat": 
false, 00:14:32.639 "rdma_srq_size": 0, 00:14:32.639 "io_path_stat": false, 00:14:32.639 "allow_accel_sequence": false, 00:14:32.639 "rdma_max_cq_size": 0, 00:14:32.639 "rdma_cm_event_timeout_ms": 0, 00:14:32.639 "dhchap_digests": [ 00:14:32.639 "sha256", 00:14:32.639 "sha384", 00:14:32.639 "sha512" 00:14:32.639 ], 00:14:32.639 "dhchap_dhgroups": [ 00:14:32.639 "null", 00:14:32.639 "ffdhe2048", 00:14:32.639 "ffdhe3072", 00:14:32.639 "ffdhe4096", 00:14:32.639 "ffdhe6144", 00:14:32.639 "ffdhe8192" 00:14:32.639 ] 00:14:32.639 } 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "method": "bdev_nvme_set_hotplug", 00:14:32.639 "params": { 00:14:32.639 "period_us": 100000, 00:14:32.639 "enable": false 00:14:32.639 } 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "method": "bdev_malloc_create", 00:14:32.639 "params": { 00:14:32.639 "name": "malloc0", 00:14:32.639 "num_blocks": 8192, 00:14:32.639 "block_size": 4096, 00:14:32.639 "physical_block_size": 4096, 00:14:32.639 "uuid": "ef773851-768a-4649-8b6c-fd8236dcbcb4", 00:14:32.639 "optimal_io_boundary": 0 00:14:32.639 } 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "method": "bdev_wait_for_examine" 00:14:32.639 } 00:14:32.639 ] 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "subsystem": "nbd", 00:14:32.639 "config": [] 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "subsystem": "scheduler", 00:14:32.639 "config": [ 00:14:32.639 { 00:14:32.639 "method": "framework_set_scheduler", 00:14:32.639 "params": { 00:14:32.639 "name": "static" 00:14:32.639 } 00:14:32.639 } 00:14:32.639 ] 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "subsystem": "nvmf", 00:14:32.639 "config": [ 00:14:32.639 { 00:14:32.639 "method": "nvmf_set_config", 00:14:32.639 "params": { 00:14:32.639 "discovery_filter": "match_any", 00:14:32.639 "admin_cmd_passthru": { 00:14:32.639 "identify_ctrlr": false 00:14:32.639 } 00:14:32.639 } 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "method": "nvmf_set_max_subsystems", 00:14:32.639 "params": { 00:14:32.639 "max_subsystems": 1024 00:14:32.639 } 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "method": "nvmf_set_crdt", 00:14:32.639 "params": { 00:14:32.639 "crdt1": 0, 00:14:32.639 "crdt2": 0, 00:14:32.639 "crdt3": 0 00:14:32.639 } 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "method": "nvmf_create_transport", 00:14:32.639 "params": { 00:14:32.639 "trtype": "TCP", 00:14:32.639 "max_queue_depth": 128, 00:14:32.639 "max_io_qpairs_per_ctrlr": 127, 00:14:32.639 "in_capsule_data_size": 4096, 00:14:32.639 "max_io_size": 131072, 00:14:32.639 "io_unit_size": 131072, 00:14:32.639 "max_aq_depth": 128, 00:14:32.639 "num_shared_buffers": 511, 00:14:32.639 "buf_cache_size": 4294967295, 00:14:32.639 "dif_insert_or_strip": false, 00:14:32.639 "zcopy": false, 00:14:32.639 "c2h_success": false, 00:14:32.639 "sock_priority": 0, 00:14:32.639 "abort_timeout_sec": 1, 00:14:32.639 "ack_timeout": 0 00:14:32.639 } 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "method": "nvmf_create_subsystem", 00:14:32.639 "params": { 00:14:32.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.639 "allow_any_host": false, 00:14:32.639 "serial_number": "SPDK00000000000001", 00:14:32.639 "model_number": "SPDK bdev Controller", 00:14:32.639 "max_namespaces": 10, 00:14:32.639 "min_cntlid": 1, 00:14:32.639 "max_cntlid": 65519, 00:14:32.639 "ana_reporting": false 00:14:32.639 } 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "method": "nvmf_subsystem_add_host", 00:14:32.639 "params": { 00:14:32.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.639 "host": "nqn.2016-06.io.spdk:host1", 00:14:32.639 "psk": 
"/tmp/tmp.rOBhabvfA8" 00:14:32.639 } 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "method": "nvmf_subsystem_add_ns", 00:14:32.639 "params": { 00:14:32.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.639 "namespace": { 00:14:32.639 "nsid": 1, 00:14:32.639 "bdev_name": "malloc0", 00:14:32.639 "nguid": "EF773851768A46498B6CFD8236DCBCB4", 00:14:32.639 "uuid": "ef773851-768a-4649-8b6c-fd8236dcbcb4", 00:14:32.639 "no_auto_visible": false 00:14:32.639 } 00:14:32.639 } 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "method": "nvmf_subsystem_add_listener", 00:14:32.639 "params": { 00:14:32.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.639 "listen_address": { 00:14:32.639 "trtype": "TCP", 00:14:32.639 "adrfam": "IPv4", 00:14:32.639 "traddr": "10.0.0.2", 00:14:32.639 "trsvcid": "4420" 00:14:32.639 }, 00:14:32.639 "secure_channel": true 00:14:32.639 } 00:14:32.639 } 00:14:32.639 ] 00:14:32.639 } 00:14:32.639 ] 00:14:32.639 }' 00:14:32.639 03:28:10 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:32.897 03:28:10 -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:32.897 "subsystems": [ 00:14:32.897 { 00:14:32.897 "subsystem": "keyring", 00:14:32.897 "config": [] 00:14:32.897 }, 00:14:32.897 { 00:14:32.897 "subsystem": "iobuf", 00:14:32.897 "config": [ 00:14:32.897 { 00:14:32.897 "method": "iobuf_set_options", 00:14:32.897 "params": { 00:14:32.897 "small_pool_count": 8192, 00:14:32.897 "large_pool_count": 1024, 00:14:32.897 "small_bufsize": 8192, 00:14:32.897 "large_bufsize": 135168 00:14:32.897 } 00:14:32.897 } 00:14:32.897 ] 00:14:32.897 }, 00:14:32.897 { 00:14:32.897 "subsystem": "sock", 00:14:32.897 "config": [ 00:14:32.897 { 00:14:32.897 "method": "sock_impl_set_options", 00:14:32.897 "params": { 00:14:32.897 "impl_name": "posix", 00:14:32.897 "recv_buf_size": 2097152, 00:14:32.897 "send_buf_size": 2097152, 00:14:32.897 "enable_recv_pipe": true, 00:14:32.897 "enable_quickack": false, 00:14:32.897 "enable_placement_id": 0, 00:14:32.897 "enable_zerocopy_send_server": true, 00:14:32.897 "enable_zerocopy_send_client": false, 00:14:32.897 "zerocopy_threshold": 0, 00:14:32.897 "tls_version": 0, 00:14:32.897 "enable_ktls": false 00:14:32.897 } 00:14:32.897 }, 00:14:32.897 { 00:14:32.897 "method": "sock_impl_set_options", 00:14:32.897 "params": { 00:14:32.897 "impl_name": "ssl", 00:14:32.897 "recv_buf_size": 4096, 00:14:32.897 "send_buf_size": 4096, 00:14:32.897 "enable_recv_pipe": true, 00:14:32.897 "enable_quickack": false, 00:14:32.897 "enable_placement_id": 0, 00:14:32.897 "enable_zerocopy_send_server": true, 00:14:32.897 "enable_zerocopy_send_client": false, 00:14:32.897 "zerocopy_threshold": 0, 00:14:32.897 "tls_version": 0, 00:14:32.897 "enable_ktls": false 00:14:32.897 } 00:14:32.897 } 00:14:32.897 ] 00:14:32.897 }, 00:14:32.897 { 00:14:32.897 "subsystem": "vmd", 00:14:32.897 "config": [] 00:14:32.897 }, 00:14:32.898 { 00:14:32.898 "subsystem": "accel", 00:14:32.898 "config": [ 00:14:32.898 { 00:14:32.898 "method": "accel_set_options", 00:14:32.898 "params": { 00:14:32.898 "small_cache_size": 128, 00:14:32.898 "large_cache_size": 16, 00:14:32.898 "task_count": 2048, 00:14:32.898 "sequence_count": 2048, 00:14:32.898 "buf_count": 2048 00:14:32.898 } 00:14:32.898 } 00:14:32.898 ] 00:14:32.898 }, 00:14:32.898 { 00:14:32.898 "subsystem": "bdev", 00:14:32.898 "config": [ 00:14:32.898 { 00:14:32.898 "method": "bdev_set_options", 00:14:32.898 "params": { 00:14:32.898 "bdev_io_pool_size": 65535, 00:14:32.898 
"bdev_io_cache_size": 256, 00:14:32.898 "bdev_auto_examine": true, 00:14:32.898 "iobuf_small_cache_size": 128, 00:14:32.898 "iobuf_large_cache_size": 16 00:14:32.898 } 00:14:32.898 }, 00:14:32.898 { 00:14:32.898 "method": "bdev_raid_set_options", 00:14:32.898 "params": { 00:14:32.898 "process_window_size_kb": 1024 00:14:32.898 } 00:14:32.898 }, 00:14:32.898 { 00:14:32.898 "method": "bdev_iscsi_set_options", 00:14:32.898 "params": { 00:14:32.898 "timeout_sec": 30 00:14:32.898 } 00:14:32.898 }, 00:14:32.898 { 00:14:32.898 "method": "bdev_nvme_set_options", 00:14:32.898 "params": { 00:14:32.898 "action_on_timeout": "none", 00:14:32.898 "timeout_us": 0, 00:14:32.898 "timeout_admin_us": 0, 00:14:32.898 "keep_alive_timeout_ms": 10000, 00:14:32.898 "arbitration_burst": 0, 00:14:32.898 "low_priority_weight": 0, 00:14:32.898 "medium_priority_weight": 0, 00:14:32.898 "high_priority_weight": 0, 00:14:32.898 "nvme_adminq_poll_period_us": 10000, 00:14:32.898 "nvme_ioq_poll_period_us": 0, 00:14:32.898 "io_queue_requests": 512, 00:14:32.898 "delay_cmd_submit": true, 00:14:32.898 "transport_retry_count": 4, 00:14:32.898 "bdev_retry_count": 3, 00:14:32.898 "transport_ack_timeout": 0, 00:14:32.898 "ctrlr_loss_timeout_sec": 0, 00:14:32.898 "reconnect_delay_sec": 0, 00:14:32.898 "fast_io_fail_timeout_sec": 0, 00:14:32.898 "disable_auto_failback": false, 00:14:32.898 "generate_uuids": false, 00:14:32.898 "transport_tos": 0, 00:14:32.898 "nvme_error_stat": false, 00:14:32.898 "rdma_srq_size": 0, 00:14:32.898 "io_path_stat": false, 00:14:32.898 "allow_accel_sequence": false, 00:14:32.898 "rdma_max_cq_size": 0, 00:14:32.898 "rdma_cm_event_timeout_ms": 0, 00:14:32.898 "dhchap_digests": [ 00:14:32.898 "sha256", 00:14:32.898 "sha384", 00:14:32.898 "sha512" 00:14:32.898 ], 00:14:32.898 "dhchap_dhgroups": [ 00:14:32.898 "null", 00:14:32.898 "ffdhe2048", 00:14:32.898 "ffdhe3072", 00:14:32.898 "ffdhe4096", 00:14:32.898 "ffdhe6144", 00:14:32.898 "ffdhe8192" 00:14:32.898 ] 00:14:32.898 } 00:14:32.898 }, 00:14:32.898 { 00:14:32.898 "method": "bdev_nvme_attach_controller", 00:14:32.898 "params": { 00:14:32.898 "name": "TLSTEST", 00:14:32.898 "trtype": "TCP", 00:14:32.898 "adrfam": "IPv4", 00:14:32.898 "traddr": "10.0.0.2", 00:14:32.898 "trsvcid": "4420", 00:14:32.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.898 "prchk_reftag": false, 00:14:32.898 "prchk_guard": false, 00:14:32.898 "ctrlr_loss_timeout_sec": 0, 00:14:32.898 "reconnect_delay_sec": 0, 00:14:32.898 "fast_io_fail_timeout_sec": 0, 00:14:32.898 "psk": "/tmp/tmp.rOBhabvfA8", 00:14:32.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:32.898 "hdgst": false, 00:14:32.898 "ddgst": false 00:14:32.898 } 00:14:32.898 }, 00:14:32.898 { 00:14:32.898 "method": "bdev_nvme_set_hotplug", 00:14:32.898 "params": { 00:14:32.898 "period_us": 100000, 00:14:32.898 "enable": false 00:14:32.898 } 00:14:32.898 }, 00:14:32.898 { 00:14:32.898 "method": "bdev_wait_for_examine" 00:14:32.898 } 00:14:32.898 ] 00:14:32.898 }, 00:14:32.898 { 00:14:32.898 "subsystem": "nbd", 00:14:32.898 "config": [] 00:14:32.898 } 00:14:32.898 ] 00:14:32.898 }' 00:14:32.898 03:28:10 -- target/tls.sh@199 -- # killprocess 254388 00:14:32.898 03:28:10 -- common/autotest_common.sh@936 -- # '[' -z 254388 ']' 00:14:32.898 03:28:10 -- common/autotest_common.sh@940 -- # kill -0 254388 00:14:32.898 03:28:10 -- common/autotest_common.sh@941 -- # uname 00:14:32.898 03:28:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:32.898 03:28:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 254388 00:14:33.155 03:28:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:33.155 03:28:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:33.155 03:28:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 254388' 00:14:33.155 killing process with pid 254388 00:14:33.155 03:28:10 -- common/autotest_common.sh@955 -- # kill 254388 00:14:33.155 Received shutdown signal, test time was about 10.000000 seconds 00:14:33.155 00:14:33.155 Latency(us) 00:14:33.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.155 =================================================================================================================== 00:14:33.155 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:33.155 [2024-04-19 03:28:10.474753] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:33.155 03:28:10 -- common/autotest_common.sh@960 -- # wait 254388 00:14:33.414 03:28:10 -- target/tls.sh@200 -- # killprocess 254221 00:14:33.414 03:28:10 -- common/autotest_common.sh@936 -- # '[' -z 254221 ']' 00:14:33.414 03:28:10 -- common/autotest_common.sh@940 -- # kill -0 254221 00:14:33.414 03:28:10 -- common/autotest_common.sh@941 -- # uname 00:14:33.414 03:28:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:33.414 03:28:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 254221 00:14:33.414 03:28:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:33.414 03:28:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:33.414 03:28:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 254221' 00:14:33.414 killing process with pid 254221 00:14:33.414 03:28:10 -- common/autotest_common.sh@955 -- # kill 254221 00:14:33.414 [2024-04-19 03:28:10.766822] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:33.414 03:28:10 -- common/autotest_common.sh@960 -- # wait 254221 00:14:33.672 03:28:11 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:33.672 03:28:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:33.672 03:28:11 -- target/tls.sh@203 -- # echo '{ 00:14:33.672 "subsystems": [ 00:14:33.672 { 00:14:33.673 "subsystem": "keyring", 00:14:33.673 "config": [] 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "subsystem": "iobuf", 00:14:33.673 "config": [ 00:14:33.673 { 00:14:33.673 "method": "iobuf_set_options", 00:14:33.673 "params": { 00:14:33.673 "small_pool_count": 8192, 00:14:33.673 "large_pool_count": 1024, 00:14:33.673 "small_bufsize": 8192, 00:14:33.673 "large_bufsize": 135168 00:14:33.673 } 00:14:33.673 } 00:14:33.673 ] 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "subsystem": "sock", 00:14:33.673 "config": [ 00:14:33.673 { 00:14:33.673 "method": "sock_impl_set_options", 00:14:33.673 "params": { 00:14:33.673 "impl_name": "posix", 00:14:33.673 "recv_buf_size": 2097152, 00:14:33.673 "send_buf_size": 2097152, 00:14:33.673 "enable_recv_pipe": true, 00:14:33.673 "enable_quickack": false, 00:14:33.673 "enable_placement_id": 0, 00:14:33.673 "enable_zerocopy_send_server": true, 00:14:33.673 "enable_zerocopy_send_client": false, 00:14:33.673 "zerocopy_threshold": 0, 00:14:33.673 "tls_version": 0, 00:14:33.673 "enable_ktls": false 00:14:33.673 } 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "method": "sock_impl_set_options", 
00:14:33.673 "params": { 00:14:33.673 "impl_name": "ssl", 00:14:33.673 "recv_buf_size": 4096, 00:14:33.673 "send_buf_size": 4096, 00:14:33.673 "enable_recv_pipe": true, 00:14:33.673 "enable_quickack": false, 00:14:33.673 "enable_placement_id": 0, 00:14:33.673 "enable_zerocopy_send_server": true, 00:14:33.673 "enable_zerocopy_send_client": false, 00:14:33.673 "zerocopy_threshold": 0, 00:14:33.673 "tls_version": 0, 00:14:33.673 "enable_ktls": false 00:14:33.673 } 00:14:33.673 } 00:14:33.673 ] 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "subsystem": "vmd", 00:14:33.673 "config": [] 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "subsystem": "accel", 00:14:33.673 "config": [ 00:14:33.673 { 00:14:33.673 "method": "accel_set_options", 00:14:33.673 "params": { 00:14:33.673 "small_cache_size": 128, 00:14:33.673 "large_cache_size": 16, 00:14:33.673 "task_count": 2048, 00:14:33.673 "sequence_count": 2048, 00:14:33.673 "buf_count": 2048 00:14:33.673 } 00:14:33.673 } 00:14:33.673 ] 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "subsystem": "bdev", 00:14:33.673 "config": [ 00:14:33.673 { 00:14:33.673 "method": "bdev_set_options", 00:14:33.673 "params": { 00:14:33.673 "bdev_io_pool_size": 65535, 00:14:33.673 "bdev_io_cache_size": 256, 00:14:33.673 "bdev_auto_examine": true, 00:14:33.673 "iobuf_small_cache_size": 128, 00:14:33.673 "iobuf_large_cache_size": 16 00:14:33.673 } 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "method": "bdev_raid_set_options", 00:14:33.673 "params": { 00:14:33.673 "process_window_size_kb": 1024 00:14:33.673 } 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "method": "bdev_iscsi_set_options", 00:14:33.673 "params": { 00:14:33.673 "timeout_sec": 30 00:14:33.673 } 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "method": "bdev_nvme_set_options", 00:14:33.673 "params": { 00:14:33.673 "action_on_timeout": "none", 00:14:33.673 "timeout_us": 0, 00:14:33.673 "timeout_admin_us": 0, 00:14:33.673 "keep_alive_timeout_ms": 10000, 00:14:33.673 "arbitration_burst": 0, 00:14:33.673 "low_priority_weight": 0, 00:14:33.673 "medium_priority_weight": 0, 00:14:33.673 "high_priority_weight": 0, 00:14:33.673 "nvme_adminq_poll_period_us": 10000, 00:14:33.673 "nvme_ioq_poll_period_us": 0, 00:14:33.673 "io_queue_requests": 0, 00:14:33.673 "delay_cmd_submit": true, 00:14:33.673 "transport_retry_count": 4, 00:14:33.673 "bdev_retry_count": 3, 00:14:33.673 "transport_ack_timeout": 0, 00:14:33.673 "ctrlr_loss_timeout_sec": 0, 00:14:33.673 "reconnect_delay_sec": 0, 00:14:33.673 "fast_io_fail_timeout_sec": 0, 00:14:33.673 "disable_auto_failback": false, 00:14:33.673 "generate_uuids": false, 00:14:33.673 "transport_tos": 0, 00:14:33.673 "nvme_error_stat": false, 00:14:33.673 "rdma_srq_size": 0, 00:14:33.673 "io_path_stat": false, 00:14:33.673 "allow_accel_sequence": false, 00:14:33.673 "rdma_max_cq_size": 0, 00:14:33.673 "rdma_cm_event_timeout_ms": 0, 00:14:33.673 "dhchap_digests": [ 00:14:33.673 "sha256", 00:14:33.673 "sha384", 00:14:33.673 "sha512" 00:14:33.673 ], 00:14:33.673 "dhchap_dhgroups": [ 00:14:33.673 "null", 00:14:33.673 "ffdhe2048", 00:14:33.673 "ffdhe3072", 00:14:33.673 "ffdhe4096", 00:14:33.673 "ffdhe6144", 00:14:33.673 "ffdhe8192" 00:14:33.673 ] 00:14:33.673 } 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "method": "bdev_nvme_set_hotplug", 00:14:33.673 "params": { 00:14:33.673 03:28:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:33.673 "period_us": 100000, 00:14:33.673 "enable": false 00:14:33.673 } 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "method": "bdev_malloc_create", 00:14:33.673 "params": { 
00:14:33.673 "name": "malloc0", 00:14:33.673 "num_blocks": 8192, 00:14:33.673 "block_size": 4096, 00:14:33.673 "physical_block_size": 4096, 00:14:33.673 "uuid": "ef773851-768a-4649-8b6c-fd8236dcbcb4", 00:14:33.673 "optimal_io_boundary": 0 00:14:33.673 } 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "method": "bdev_wait_for_examine" 00:14:33.673 } 00:14:33.673 ] 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "subsystem": "nbd", 00:14:33.673 "config": [] 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "subsystem": "scheduler", 00:14:33.673 "config": [ 00:14:33.673 { 00:14:33.673 "method": "framework_set_scheduler", 00:14:33.673 "params": { 00:14:33.673 "name": "static" 00:14:33.673 } 00:14:33.673 } 00:14:33.673 ] 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "subsystem": "nvmf", 00:14:33.673 "config": [ 00:14:33.673 { 00:14:33.673 "method": "nvmf_set_config", 00:14:33.673 "params": { 00:14:33.673 "discovery_filter": "match_any", 00:14:33.673 "admin_cmd_passthru": { 00:14:33.673 "identify_ctrlr": false 00:14:33.673 } 00:14:33.673 } 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "method": "nvmf_set_max_subsystems", 00:14:33.673 "params": { 00:14:33.673 "max_subsystems": 1024 00:14:33.673 } 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "method": "nvmf_set_crdt", 00:14:33.673 "params": { 00:14:33.673 "crdt1": 0, 00:14:33.673 "crdt2": 0, 00:14:33.673 "crdt3": 0 00:14:33.673 } 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "method": "nvmf_create_transport", 00:14:33.673 "params": { 00:14:33.673 "trtype": "TCP", 00:14:33.673 "max_queue_depth": 128, 00:14:33.673 "max_io_qpairs_per_ctrlr": 127, 00:14:33.673 "in_capsule_data_size": 4096, 00:14:33.673 "max_io_size": 131072, 00:14:33.673 "io_unit_size": 131072, 00:14:33.673 "max_aq_depth": 128, 00:14:33.673 "num_shared_buffers": 511, 00:14:33.673 "buf_cache_size": 4294967295, 00:14:33.673 "dif_insert_or_strip": false, 00:14:33.673 "zcopy": false, 00:14:33.673 "c2h_success": false, 00:14:33.673 "sock_priority": 0, 00:14:33.673 "abort_timeout_sec": 1, 00:14:33.673 "ack_timeout": 0 00:14:33.673 } 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "method": "nvmf_create_subsystem", 00:14:33.673 "params": { 00:14:33.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.673 "allow_any_host": false, 00:14:33.673 "serial_number": "SPDK00000000000001", 00:14:33.673 "model_number": "SPDK bdev Controller", 00:14:33.673 "max_namespaces": 10, 00:14:33.673 "min_cntlid": 1, 00:14:33.673 "max_cntlid": 65519, 00:14:33.673 "ana_reporting": false 00:14:33.673 } 00:14:33.673 }, 00:14:33.673 { 00:14:33.673 "method": "nvmf_subsystem_add_host", 00:14:33.674 "params": { 00:14:33.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.674 "host": "nqn.2016-06.io.spdk:host1", 00:14:33.674 "psk": "/tmp/tmp.rOBhabvfA8" 00:14:33.674 } 00:14:33.674 }, 00:14:33.674 { 00:14:33.674 "method": "nvmf_subsystem_add_ns", 00:14:33.674 "params": { 00:14:33.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.674 "namespace": { 00:14:33.674 "nsid": 1, 00:14:33.674 "bdev_name": "malloc0", 00:14:33.674 "nguid": "EF773851768A46498B6CFD8236DCBCB4", 00:14:33.674 "uuid": "ef773851-768a-4649-8b6c-fd8236dcbcb4", 00:14:33.674 "no_auto_visible": false 00:14:33.674 } 00:14:33.674 } 00:14:33.674 }, 00:14:33.674 { 00:14:33.674 "method": "nvmf_subsystem_add_listener", 00:14:33.674 "params": { 00:14:33.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.674 "listen_address": { 00:14:33.674 "trtype": "TCP", 00:14:33.674 "adrfam": "IPv4", 00:14:33.674 "traddr": "10.0.0.2", 00:14:33.674 "trsvcid": "4420" 00:14:33.674 }, 00:14:33.674 "secure_channel": 
true 00:14:33.674 } 00:14:33.674 } 00:14:33.674 ] 00:14:33.674 } 00:14:33.674 ] 00:14:33.674 }' 00:14:33.674 03:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:33.674 03:28:11 -- nvmf/common.sh@470 -- # nvmfpid=254666 00:14:33.674 03:28:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:33.674 03:28:11 -- nvmf/common.sh@471 -- # waitforlisten 254666 00:14:33.674 03:28:11 -- common/autotest_common.sh@817 -- # '[' -z 254666 ']' 00:14:33.674 03:28:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.674 03:28:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:33.674 03:28:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.674 03:28:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:33.674 03:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:33.674 [2024-04-19 03:28:11.119660] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:33.674 [2024-04-19 03:28:11.119762] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.674 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.674 [2024-04-19 03:28:11.186546] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.933 [2024-04-19 03:28:11.300765] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.933 [2024-04-19 03:28:11.300828] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.933 [2024-04-19 03:28:11.300842] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.933 [2024-04-19 03:28:11.300853] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.933 [2024-04-19 03:28:11.300863] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
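The JSON blob echoed just above is the output of 'rpc.py save_config' from the previous target, replayed into a fresh nvmf_tgt; bash process substitution is why the command line reads '-c /dev/fd/62'. A minimal sketch of that round-trip, assuming an SPDK checkout as the working directory and the default /var/tmp/spdk.sock RPC socket (the tgtconf variable name is illustrative, and the exact /dev/fd number is chosen by the shell):

  # Capture the running target's live configuration as JSON.
  tgtconf=$(scripts/rpc.py save_config)

  # Start a fresh target with the identical configuration; <(...) is
  # exposed to the child process as a /dev/fd/NN path, hence /dev/fd/62.
  build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")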
00:14:33.933 [2024-04-19 03:28:11.300963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.192 [2024-04-19 03:28:11.530348] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.192 [2024-04-19 03:28:11.546324] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:34.192 [2024-04-19 03:28:11.562374] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:34.192 [2024-04-19 03:28:11.580531] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.758 03:28:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:34.759 03:28:12 -- common/autotest_common.sh@850 -- # return 0 00:14:34.759 03:28:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:34.759 03:28:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:34.759 03:28:12 -- common/autotest_common.sh@10 -- # set +x 00:14:34.759 03:28:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.759 03:28:12 -- target/tls.sh@207 -- # bdevperf_pid=254820 00:14:34.759 03:28:12 -- target/tls.sh@208 -- # waitforlisten 254820 /var/tmp/bdevperf.sock 00:14:34.759 03:28:12 -- common/autotest_common.sh@817 -- # '[' -z 254820 ']' 00:14:34.759 03:28:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:34.759 03:28:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:34.759 03:28:12 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:34.759 03:28:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:34.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
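Both bdevperf runs in this section follow the same driving pattern: start bdevperf idle ('-z') with its own RPC socket, hand it a TLS-enabled NVMe-oF controller, then fire the queued workload with perform_tests. In the run just started the controller config arrives pre-baked through '-c /dev/fd/63'; the first run (tls.sh@192, earlier) did the attach over RPC. A condensed sketch of the RPC form, with every flag, address and path taken from this log and relative paths assuming an SPDK checkout:

  # Start bdevperf idle, listening on a private RPC socket.
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &

  # Attach the TLS controller; passing a PSK *path* to --psk is the
  # deprecated form behind the nvme_ctrlr_psk warnings in this log.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.rOBhabvfA8

  # Kick off the configured verify workload (-t 20 is only the RPC wait cap).
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests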
00:14:34.759 03:28:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:34.759 03:28:12 -- common/autotest_common.sh@10 -- # set +x 00:14:34.759 03:28:12 -- target/tls.sh@204 -- # echo '{ 00:14:34.759 "subsystems": [ 00:14:34.759 { 00:14:34.759 "subsystem": "keyring", 00:14:34.759 "config": [] 00:14:34.759 }, 00:14:34.759 { 00:14:34.759 "subsystem": "iobuf", 00:14:34.759 "config": [ 00:14:34.759 { 00:14:34.759 "method": "iobuf_set_options", 00:14:34.759 "params": { 00:14:34.759 "small_pool_count": 8192, 00:14:34.759 "large_pool_count": 1024, 00:14:34.759 "small_bufsize": 8192, 00:14:34.759 "large_bufsize": 135168 00:14:34.759 } 00:14:34.759 } 00:14:34.759 ] 00:14:34.759 }, 00:14:34.759 { 00:14:34.759 "subsystem": "sock", 00:14:34.759 "config": [ 00:14:34.759 { 00:14:34.759 "method": "sock_impl_set_options", 00:14:34.759 "params": { 00:14:34.759 "impl_name": "posix", 00:14:34.759 "recv_buf_size": 2097152, 00:14:34.759 "send_buf_size": 2097152, 00:14:34.759 "enable_recv_pipe": true, 00:14:34.759 "enable_quickack": false, 00:14:34.759 "enable_placement_id": 0, 00:14:34.759 "enable_zerocopy_send_server": true, 00:14:34.759 "enable_zerocopy_send_client": false, 00:14:34.759 "zerocopy_threshold": 0, 00:14:34.759 "tls_version": 0, 00:14:34.759 "enable_ktls": false 00:14:34.759 } 00:14:34.759 }, 00:14:34.759 { 00:14:34.759 "method": "sock_impl_set_options", 00:14:34.759 "params": { 00:14:34.759 "impl_name": "ssl", 00:14:34.759 "recv_buf_size": 4096, 00:14:34.759 "send_buf_size": 4096, 00:14:34.759 "enable_recv_pipe": true, 00:14:34.759 "enable_quickack": false, 00:14:34.759 "enable_placement_id": 0, 00:14:34.759 "enable_zerocopy_send_server": true, 00:14:34.759 "enable_zerocopy_send_client": false, 00:14:34.759 "zerocopy_threshold": 0, 00:14:34.759 "tls_version": 0, 00:14:34.759 "enable_ktls": false 00:14:34.759 } 00:14:34.759 } 00:14:34.759 ] 00:14:34.759 }, 00:14:34.759 { 00:14:34.759 "subsystem": "vmd", 00:14:34.759 "config": [] 00:14:34.759 }, 00:14:34.759 { 00:14:34.759 "subsystem": "accel", 00:14:34.759 "config": [ 00:14:34.759 { 00:14:34.759 "method": "accel_set_options", 00:14:34.759 "params": { 00:14:34.759 "small_cache_size": 128, 00:14:34.759 "large_cache_size": 16, 00:14:34.759 "task_count": 2048, 00:14:34.759 "sequence_count": 2048, 00:14:34.759 "buf_count": 2048 00:14:34.759 } 00:14:34.759 } 00:14:34.759 ] 00:14:34.759 }, 00:14:34.759 { 00:14:34.759 "subsystem": "bdev", 00:14:34.759 "config": [ 00:14:34.759 { 00:14:34.759 "method": "bdev_set_options", 00:14:34.759 "params": { 00:14:34.759 "bdev_io_pool_size": 65535, 00:14:34.759 "bdev_io_cache_size": 256, 00:14:34.759 "bdev_auto_examine": true, 00:14:34.759 "iobuf_small_cache_size": 128, 00:14:34.759 "iobuf_large_cache_size": 16 00:14:34.759 } 00:14:34.759 }, 00:14:34.759 { 00:14:34.759 "method": "bdev_raid_set_options", 00:14:34.759 "params": { 00:14:34.759 "process_window_size_kb": 1024 00:14:34.759 } 00:14:34.759 }, 00:14:34.759 { 00:14:34.759 "method": "bdev_iscsi_set_options", 00:14:34.759 "params": { 00:14:34.759 "timeout_sec": 30 00:14:34.759 } 00:14:34.759 }, 00:14:34.759 { 00:14:34.759 "method": "bdev_nvme_set_options", 00:14:34.759 "params": { 00:14:34.759 "action_on_timeout": "none", 00:14:34.759 "timeout_us": 0, 00:14:34.759 "timeout_admin_us": 0, 00:14:34.759 "keep_alive_timeout_ms": 10000, 00:14:34.759 "arbitration_burst": 0, 00:14:34.759 "low_priority_weight": 0, 00:14:34.759 "medium_priority_weight": 0, 00:14:34.759 "high_priority_weight": 0, 00:14:34.759 "nvme_adminq_poll_period_us": 10000, 00:14:34.759 
"nvme_ioq_poll_period_us": 0, 00:14:34.759 "io_queue_requests": 512, 00:14:34.759 "delay_cmd_submit": true, 00:14:34.759 "transport_retry_count": 4, 00:14:34.759 "bdev_retry_count": 3, 00:14:34.759 "transport_ack_timeout": 0, 00:14:34.759 "ctrlr_loss_timeout_sec": 0, 00:14:34.759 "reconnect_delay_sec": 0, 00:14:34.759 "fast_io_fail_timeout_sec": 0, 00:14:34.759 "disable_auto_failback": false, 00:14:34.759 "generate_uuids": false, 00:14:34.759 "transport_tos": 0, 00:14:34.759 "nvme_error_stat": false, 00:14:34.759 "rdma_srq_size": 0, 00:14:34.759 "io_path_stat": false, 00:14:34.759 "allow_accel_sequence": false, 00:14:34.759 "rdma_max_cq_size": 0, 00:14:34.759 "rdma_cm_event_timeout_ms": 0, 00:14:34.759 "dhchap_digests": [ 00:14:34.759 "sha256", 00:14:34.759 "sha384", 00:14:34.759 "sha512" 00:14:34.759 ], 00:14:34.759 "dhchap_dhgroups": [ 00:14:34.759 "null", 00:14:34.759 "ffdhe2048", 00:14:34.759 "ffdhe3072", 00:14:34.759 "ffdhe4096", 00:14:34.759 "ffdhe6144", 00:14:34.759 "ffdhe8192" 00:14:34.759 ] 00:14:34.759 } 00:14:34.759 }, 00:14:34.759 { 00:14:34.759 "method": "bdev_nvme_attach_controller", 00:14:34.759 "params": { 00:14:34.759 "name": "TLSTEST", 00:14:34.759 "trtype": "TCP", 00:14:34.759 "adrfam": "IPv4", 00:14:34.759 "traddr": "10.0.0.2", 00:14:34.759 "trsvcid": "4420", 00:14:34.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.759 "prchk_reftag": false, 00:14:34.759 "prchk_guard": false, 00:14:34.759 "ctrlr_loss_timeout_sec": 0, 00:14:34.759 "reconnect_delay_sec": 0, 00:14:34.759 "fast_io_fail_timeout_sec": 0, 00:14:34.759 "psk": "/tmp/tmp.rOBhabvfA8", 00:14:34.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:34.759 "hdgst": false, 00:14:34.759 "ddgst": false 00:14:34.759 } 00:14:34.759 }, 00:14:34.759 { 00:14:34.759 "method": "bdev_nvme_set_hotplug", 00:14:34.759 "params": { 00:14:34.759 "period_us": 100000, 00:14:34.759 "enable": false 00:14:34.759 } 00:14:34.759 }, 00:14:34.759 { 00:14:34.759 "method": "bdev_wait_for_examine" 00:14:34.759 } 00:14:34.759 ] 00:14:34.759 }, 00:14:34.759 { 00:14:34.759 "subsystem": "nbd", 00:14:34.759 "config": [] 00:14:34.759 } 00:14:34.759 ] 00:14:34.759 }' 00:14:34.759 [2024-04-19 03:28:12.138417] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:34.759 [2024-04-19 03:28:12.138490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254820 ] 00:14:34.759 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.759 [2024-04-19 03:28:12.194483] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.759 [2024-04-19 03:28:12.299009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.019 [2024-04-19 03:28:12.461378] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:35.019 [2024-04-19 03:28:12.461517] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:35.585 03:28:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:35.585 03:28:13 -- common/autotest_common.sh@850 -- # return 0 00:14:35.585 03:28:13 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:35.843 Running I/O for 10 seconds... 
00:14:45.817
00:14:45.817 Latency(us)
00:14:45.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:45.817 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:14:45.817 Verification LBA range: start 0x0 length 0x2000
00:14:45.817 TLSTESTn1 : 10.05 2613.51 10.21 0.00 0.00 48838.90 6407.96 81167.55
00:14:45.817 ===================================================================================================================
00:14:45.817 Total : 2613.51 10.21 0.00 0.00 48838.90 6407.96 81167.55
00:14:45.817 0
00:14:45.817 03:28:23 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:14:45.817 03:28:23 -- target/tls.sh@214 -- # killprocess 254820
00:14:45.817 03:28:23 -- common/autotest_common.sh@936 -- # '[' -z 254820 ']'
00:14:45.817 03:28:23 -- common/autotest_common.sh@940 -- # kill -0 254820
00:14:45.817 03:28:23 -- common/autotest_common.sh@941 -- # uname
00:14:45.817 03:28:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:45.817 03:28:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 254820
00:14:45.817 03:28:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:14:45.817 03:28:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:14:45.817 03:28:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 254820'
00:14:45.817 killing process with pid 254820
00:14:45.817 03:28:23 -- common/autotest_common.sh@955 -- # kill 254820
00:14:45.817 Received shutdown signal, test time was about 10.000000 seconds
00:14:45.817
00:14:45.817 Latency(us)
00:14:45.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:45.817 ===================================================================================================================
00:14:45.817 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:45.817 [2024-04-19 03:28:23.298952] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:14:45.817 03:28:23 -- common/autotest_common.sh@960 -- # wait 254820
00:14:46.101 03:28:23 -- target/tls.sh@215 -- # killprocess 254666
00:14:46.101 03:28:23 -- common/autotest_common.sh@936 -- # '[' -z 254666 ']'
00:14:46.101 03:28:23 -- common/autotest_common.sh@940 -- # kill -0 254666
00:14:46.101 03:28:23 -- common/autotest_common.sh@941 -- # uname
00:14:46.101 03:28:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:46.101 03:28:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 254666
00:14:46.101 03:28:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:14:46.101 03:28:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:14:46.101 03:28:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 254666'
00:14:46.101 killing process with pid 254666
00:14:46.101 03:28:23 -- common/autotest_common.sh@955 -- # kill 254666
00:14:46.101 [2024-04-19 03:28:23.589111] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:14:46.101 03:28:23 -- common/autotest_common.sh@960 -- # wait 254666
00:14:46.364 03:28:23 -- target/tls.sh@218 -- # nvmfappstart
00:14:46.364 03:28:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:14:46.364 03:28:23 -- common/autotest_common.sh@710 -- # xtrace_disable
00:14:46.364 03:28:23 -- common/autotest_common.sh@10 -- # set +x
00:14:46.364 03:28:23 --
nvmf/common.sh@470 -- # nvmfpid=256160 00:14:46.364 03:28:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:46.364 03:28:23 -- nvmf/common.sh@471 -- # waitforlisten 256160 00:14:46.364 03:28:23 -- common/autotest_common.sh@817 -- # '[' -z 256160 ']' 00:14:46.364 03:28:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.364 03:28:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:46.364 03:28:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.364 03:28:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:46.364 03:28:23 -- common/autotest_common.sh@10 -- # set +x 00:14:46.622 [2024-04-19 03:28:23.937004] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:46.622 [2024-04-19 03:28:23.937096] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.622 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.622 [2024-04-19 03:28:23.999893] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.622 [2024-04-19 03:28:24.104255] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.622 [2024-04-19 03:28:24.104309] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.622 [2024-04-19 03:28:24.104339] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.622 [2024-04-19 03:28:24.104350] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.622 [2024-04-19 03:28:24.104360] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
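Because the target is launched with '-e 0xFFFF', all tracepoint groups are enabled, and the notices above give two ways to inspect the events; sketched here with the exact commands the target itself suggests (the spdk_trace binary location assumes a standard SPDK build):

  # Snapshot trace events from the live app "nvmf", shm instance id 0 (matching "-i 0").
  build/bin/spdk_trace -s nvmf -i 0

  # Or keep the raw shared-memory trace file for offline analysis.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0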
00:14:46.622 [2024-04-19 03:28:24.104419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.879 03:28:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:46.879 03:28:24 -- common/autotest_common.sh@850 -- # return 0 00:14:46.879 03:28:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:46.879 03:28:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:46.879 03:28:24 -- common/autotest_common.sh@10 -- # set +x 00:14:46.879 03:28:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.879 03:28:24 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.rOBhabvfA8 00:14:46.879 03:28:24 -- target/tls.sh@49 -- # local key=/tmp/tmp.rOBhabvfA8 00:14:46.879 03:28:24 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:47.137 [2024-04-19 03:28:24.486167] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.137 03:28:24 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:47.394 03:28:24 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:47.651 [2024-04-19 03:28:24.995508] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:47.651 [2024-04-19 03:28:24.995763] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.651 03:28:25 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:47.910 malloc0 00:14:47.910 03:28:25 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:48.167 03:28:25 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rOBhabvfA8 00:14:48.426 [2024-04-19 03:28:25.745519] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:48.426 03:28:25 -- target/tls.sh@222 -- # bdevperf_pid=256436 00:14:48.426 03:28:25 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:48.426 03:28:25 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:48.426 03:28:25 -- target/tls.sh@225 -- # waitforlisten 256436 /var/tmp/bdevperf.sock 00:14:48.426 03:28:25 -- common/autotest_common.sh@817 -- # '[' -z 256436 ']' 00:14:48.426 03:28:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.426 03:28:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:48.426 03:28:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
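The setup_nvmf_tgt helper traced above reduces to six RPCs against the target; consolidated as a sketch using this run's own arguments ('-k' marks the listener as TLS, and handing --psk a raw file path is the deprecated form behind the nvmf_tcp_psk_path warning):

  rpc=scripts/rpc.py
  key=/tmp/tmp.rOBhabvfA8                    # interchange-format PSK file from this run

  $rpc nvmf_create_transport -t tcp -o       # -o: disable C2H success (c2h_success=false in the saved config)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0 # 8192 x 4096-byte blocks, per the saved config
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"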
00:14:48.426 03:28:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:48.426 03:28:25 -- common/autotest_common.sh@10 -- # set +x 00:14:48.426 [2024-04-19 03:28:25.810517] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:48.426 [2024-04-19 03:28:25.810595] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid256436 ] 00:14:48.426 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.426 [2024-04-19 03:28:25.877764] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.684 [2024-04-19 03:28:26.001039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.684 03:28:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:48.684 03:28:26 -- common/autotest_common.sh@850 -- # return 0 00:14:48.684 03:28:26 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rOBhabvfA8 00:14:48.942 03:28:26 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:49.199 [2024-04-19 03:28:26.589791] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:49.199 nvme0n1 00:14:49.199 03:28:26 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:49.457 Running I/O for 1 seconds... 
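What changed relative to the earlier attach is the keyring: the PSK file is first registered as a named key, and --psk is then given the key name rather than a path, avoiding the deprecated form. A sketch of just that difference, with the socket, address and key path as in this run:

  # Register the PSK file under the name "key0"...
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rOBhabvfA8

  # ...then attach by key name instead of '--psk /tmp/tmp.rOBhabvfA8'.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1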
00:14:50.392
00:14:50.392 Latency(us)
00:14:50.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:50.392 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:50.392 Verification LBA range: start 0x0 length 0x2000
00:14:50.392 nvme0n1 : 1.05 2522.42 9.85 0.00 0.00 49724.20 11747.93 76118.85
00:14:50.392 ===================================================================================================================
00:14:50.392 Total : 2522.42 9.85 0.00 0.00 49724.20 11747.93 76118.85
00:14:50.392 0
00:14:50.392 03:28:27 -- target/tls.sh@234 -- # killprocess 256436
00:14:50.392 03:28:27 -- common/autotest_common.sh@936 -- # '[' -z 256436 ']'
00:14:50.392 03:28:27 -- common/autotest_common.sh@940 -- # kill -0 256436
00:14:50.392 03:28:27 -- common/autotest_common.sh@941 -- # uname
00:14:50.392 03:28:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:50.392 03:28:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 256436
00:14:50.392 03:28:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:14:50.392 03:28:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:14:50.392 03:28:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 256436'
00:14:50.392 killing process with pid 256436
00:14:50.392 03:28:27 -- common/autotest_common.sh@955 -- # kill 256436
00:14:50.392 Received shutdown signal, test time was about 1.000000 seconds
00:14:50.392
00:14:50.392 Latency(us)
00:14:50.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:50.392 ===================================================================================================================
00:14:50.392 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:50.392 03:28:27 -- common/autotest_common.sh@960 -- # wait 256436
00:14:50.650 03:28:28 -- target/tls.sh@235 -- # killprocess 256160
00:14:50.650 03:28:28 -- common/autotest_common.sh@936 -- # '[' -z 256160 ']'
00:14:50.650 03:28:28 -- common/autotest_common.sh@940 -- # kill -0 256160
00:14:50.650 03:28:28 -- common/autotest_common.sh@941 -- # uname
00:14:50.650 03:28:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:50.650 03:28:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 256160
00:14:50.650 03:28:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:50.650 03:28:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:50.650 03:28:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 256160'
00:14:50.650 killing process with pid 256160
00:14:50.650 03:28:28 -- common/autotest_common.sh@955 -- # kill 256160
00:14:50.650 [2024-04-19 03:28:28.176060] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:14:50.650 03:28:28 -- common/autotest_common.sh@960 -- # wait 256160
00:14:51.217 03:28:28 -- target/tls.sh@238 -- # nvmfappstart
00:14:51.217 03:28:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:14:51.217 03:28:28 -- common/autotest_common.sh@710 -- # xtrace_disable
00:14:51.217 03:28:28 -- common/autotest_common.sh@10 -- # set +x
00:14:51.217 03:28:28 -- nvmf/common.sh@470 -- # nvmfpid=256726
00:14:51.217 03:28:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:14:51.217 03:28:28 -- nvmf/common.sh@471 -- # waitforlisten 256726
00:14:51.217 03:28:28 --
common/autotest_common.sh@817 -- # '[' -z 256726 ']' 00:14:51.217 03:28:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.217 03:28:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:51.217 03:28:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.217 03:28:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:51.217 03:28:28 -- common/autotest_common.sh@10 -- # set +x 00:14:51.217 [2024-04-19 03:28:28.526510] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:51.217 [2024-04-19 03:28:28.526587] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.217 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.217 [2024-04-19 03:28:28.591155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.217 [2024-04-19 03:28:28.710346] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.217 [2024-04-19 03:28:28.710434] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.217 [2024-04-19 03:28:28.710451] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.217 [2024-04-19 03:28:28.710463] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.217 [2024-04-19 03:28:28.710474] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:51.217 [2024-04-19 03:28:28.710501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.151 03:28:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:52.151 03:28:29 -- common/autotest_common.sh@850 -- # return 0 00:14:52.151 03:28:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:52.151 03:28:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:52.151 03:28:29 -- common/autotest_common.sh@10 -- # set +x 00:14:52.151 03:28:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.151 03:28:29 -- target/tls.sh@239 -- # rpc_cmd 00:14:52.151 03:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.151 03:28:29 -- common/autotest_common.sh@10 -- # set +x 00:14:52.151 [2024-04-19 03:28:29.499234] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.151 malloc0 00:14:52.151 [2024-04-19 03:28:29.532082] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:52.151 [2024-04-19 03:28:29.532365] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.151 03:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.151 03:28:29 -- target/tls.sh@252 -- # bdevperf_pid=256874 00:14:52.151 03:28:29 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:52.151 03:28:29 -- target/tls.sh@254 -- # waitforlisten 256874 /var/tmp/bdevperf.sock 00:14:52.151 03:28:29 -- common/autotest_common.sh@817 -- # '[' -z 256874 ']' 00:14:52.151 03:28:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.151 03:28:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:52.151 03:28:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:52.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.151 03:28:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:52.151 03:28:29 -- common/autotest_common.sh@10 -- # set +x 00:14:52.151 [2024-04-19 03:28:29.602509] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:14:52.151 [2024-04-19 03:28:29.602572] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid256874 ] 00:14:52.151 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.151 [2024-04-19 03:28:29.662933] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.409 [2024-04-19 03:28:29.780419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.409 03:28:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:52.409 03:28:29 -- common/autotest_common.sh@850 -- # return 0 00:14:52.409 03:28:29 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rOBhabvfA8 00:14:52.667 03:28:30 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:52.924 [2024-04-19 03:28:30.424933] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:53.181 nvme0n1 00:14:53.181 03:28:30 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:53.181 Running I/O for 1 seconds... 00:14:54.553 00:14:54.553 Latency(us) 00:14:54.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.553 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.553 Verification LBA range: start 0x0 length 0x2000 00:14:54.553 nvme0n1 : 1.05 2454.71 9.59 0.00 0.00 51087.74 7330.32 88546.42 00:14:54.553 =================================================================================================================== 00:14:54.553 Total : 2454.71 9.59 0.00 0.00 51087.74 7330.32 88546.42 00:14:54.553 0 00:14:54.553 03:28:31 -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:54.553 03:28:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.553 03:28:31 -- common/autotest_common.sh@10 -- # set +x 00:14:54.553 03:28:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.554 03:28:31 -- target/tls.sh@263 -- # tgtcfg='{ 00:14:54.554 "subsystems": [ 00:14:54.554 { 00:14:54.554 "subsystem": "keyring", 00:14:54.554 "config": [ 00:14:54.554 { 00:14:54.554 "method": "keyring_file_add_key", 00:14:54.554 "params": { 00:14:54.554 "name": "key0", 00:14:54.554 "path": "/tmp/tmp.rOBhabvfA8" 00:14:54.554 } 00:14:54.554 } 00:14:54.554 ] 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "subsystem": "iobuf", 00:14:54.554 "config": [ 00:14:54.554 { 00:14:54.554 "method": "iobuf_set_options", 00:14:54.554 "params": { 00:14:54.554 "small_pool_count": 8192, 00:14:54.554 "large_pool_count": 1024, 00:14:54.554 "small_bufsize": 8192, 00:14:54.554 "large_bufsize": 135168 00:14:54.554 } 00:14:54.554 } 00:14:54.554 ] 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "subsystem": "sock", 00:14:54.554 "config": [ 00:14:54.554 { 00:14:54.554 "method": "sock_impl_set_options", 00:14:54.554 "params": { 00:14:54.554 "impl_name": "posix", 00:14:54.554 "recv_buf_size": 2097152, 00:14:54.554 "send_buf_size": 2097152, 00:14:54.554 "enable_recv_pipe": true, 00:14:54.554 "enable_quickack": false, 00:14:54.554 "enable_placement_id": 0, 00:14:54.554 
"enable_zerocopy_send_server": true, 00:14:54.554 "enable_zerocopy_send_client": false, 00:14:54.554 "zerocopy_threshold": 0, 00:14:54.554 "tls_version": 0, 00:14:54.554 "enable_ktls": false 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": "sock_impl_set_options", 00:14:54.554 "params": { 00:14:54.554 "impl_name": "ssl", 00:14:54.554 "recv_buf_size": 4096, 00:14:54.554 "send_buf_size": 4096, 00:14:54.554 "enable_recv_pipe": true, 00:14:54.554 "enable_quickack": false, 00:14:54.554 "enable_placement_id": 0, 00:14:54.554 "enable_zerocopy_send_server": true, 00:14:54.554 "enable_zerocopy_send_client": false, 00:14:54.554 "zerocopy_threshold": 0, 00:14:54.554 "tls_version": 0, 00:14:54.554 "enable_ktls": false 00:14:54.554 } 00:14:54.554 } 00:14:54.554 ] 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "subsystem": "vmd", 00:14:54.554 "config": [] 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "subsystem": "accel", 00:14:54.554 "config": [ 00:14:54.554 { 00:14:54.554 "method": "accel_set_options", 00:14:54.554 "params": { 00:14:54.554 "small_cache_size": 128, 00:14:54.554 "large_cache_size": 16, 00:14:54.554 "task_count": 2048, 00:14:54.554 "sequence_count": 2048, 00:14:54.554 "buf_count": 2048 00:14:54.554 } 00:14:54.554 } 00:14:54.554 ] 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "subsystem": "bdev", 00:14:54.554 "config": [ 00:14:54.554 { 00:14:54.554 "method": "bdev_set_options", 00:14:54.554 "params": { 00:14:54.554 "bdev_io_pool_size": 65535, 00:14:54.554 "bdev_io_cache_size": 256, 00:14:54.554 "bdev_auto_examine": true, 00:14:54.554 "iobuf_small_cache_size": 128, 00:14:54.554 "iobuf_large_cache_size": 16 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": "bdev_raid_set_options", 00:14:54.554 "params": { 00:14:54.554 "process_window_size_kb": 1024 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": "bdev_iscsi_set_options", 00:14:54.554 "params": { 00:14:54.554 "timeout_sec": 30 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": "bdev_nvme_set_options", 00:14:54.554 "params": { 00:14:54.554 "action_on_timeout": "none", 00:14:54.554 "timeout_us": 0, 00:14:54.554 "timeout_admin_us": 0, 00:14:54.554 "keep_alive_timeout_ms": 10000, 00:14:54.554 "arbitration_burst": 0, 00:14:54.554 "low_priority_weight": 0, 00:14:54.554 "medium_priority_weight": 0, 00:14:54.554 "high_priority_weight": 0, 00:14:54.554 "nvme_adminq_poll_period_us": 10000, 00:14:54.554 "nvme_ioq_poll_period_us": 0, 00:14:54.554 "io_queue_requests": 0, 00:14:54.554 "delay_cmd_submit": true, 00:14:54.554 "transport_retry_count": 4, 00:14:54.554 "bdev_retry_count": 3, 00:14:54.554 "transport_ack_timeout": 0, 00:14:54.554 "ctrlr_loss_timeout_sec": 0, 00:14:54.554 "reconnect_delay_sec": 0, 00:14:54.554 "fast_io_fail_timeout_sec": 0, 00:14:54.554 "disable_auto_failback": false, 00:14:54.554 "generate_uuids": false, 00:14:54.554 "transport_tos": 0, 00:14:54.554 "nvme_error_stat": false, 00:14:54.554 "rdma_srq_size": 0, 00:14:54.554 "io_path_stat": false, 00:14:54.554 "allow_accel_sequence": false, 00:14:54.554 "rdma_max_cq_size": 0, 00:14:54.554 "rdma_cm_event_timeout_ms": 0, 00:14:54.554 "dhchap_digests": [ 00:14:54.554 "sha256", 00:14:54.554 "sha384", 00:14:54.554 "sha512" 00:14:54.554 ], 00:14:54.554 "dhchap_dhgroups": [ 00:14:54.554 "null", 00:14:54.554 "ffdhe2048", 00:14:54.554 "ffdhe3072", 00:14:54.554 "ffdhe4096", 00:14:54.554 "ffdhe6144", 00:14:54.554 "ffdhe8192" 00:14:54.554 ] 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": 
"bdev_nvme_set_hotplug", 00:14:54.554 "params": { 00:14:54.554 "period_us": 100000, 00:14:54.554 "enable": false 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": "bdev_malloc_create", 00:14:54.554 "params": { 00:14:54.554 "name": "malloc0", 00:14:54.554 "num_blocks": 8192, 00:14:54.554 "block_size": 4096, 00:14:54.554 "physical_block_size": 4096, 00:14:54.554 "uuid": "87f5d703-f26b-4e07-bbaf-61838fa0ac00", 00:14:54.554 "optimal_io_boundary": 0 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": "bdev_wait_for_examine" 00:14:54.554 } 00:14:54.554 ] 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "subsystem": "nbd", 00:14:54.554 "config": [] 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "subsystem": "scheduler", 00:14:54.554 "config": [ 00:14:54.554 { 00:14:54.554 "method": "framework_set_scheduler", 00:14:54.554 "params": { 00:14:54.554 "name": "static" 00:14:54.554 } 00:14:54.554 } 00:14:54.554 ] 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "subsystem": "nvmf", 00:14:54.554 "config": [ 00:14:54.554 { 00:14:54.554 "method": "nvmf_set_config", 00:14:54.554 "params": { 00:14:54.554 "discovery_filter": "match_any", 00:14:54.554 "admin_cmd_passthru": { 00:14:54.554 "identify_ctrlr": false 00:14:54.554 } 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": "nvmf_set_max_subsystems", 00:14:54.554 "params": { 00:14:54.554 "max_subsystems": 1024 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": "nvmf_set_crdt", 00:14:54.554 "params": { 00:14:54.554 "crdt1": 0, 00:14:54.554 "crdt2": 0, 00:14:54.554 "crdt3": 0 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": "nvmf_create_transport", 00:14:54.554 "params": { 00:14:54.554 "trtype": "TCP", 00:14:54.554 "max_queue_depth": 128, 00:14:54.554 "max_io_qpairs_per_ctrlr": 127, 00:14:54.554 "in_capsule_data_size": 4096, 00:14:54.554 "max_io_size": 131072, 00:14:54.554 "io_unit_size": 131072, 00:14:54.554 "max_aq_depth": 128, 00:14:54.554 "num_shared_buffers": 511, 00:14:54.554 "buf_cache_size": 4294967295, 00:14:54.554 "dif_insert_or_strip": false, 00:14:54.554 "zcopy": false, 00:14:54.554 "c2h_success": false, 00:14:54.554 "sock_priority": 0, 00:14:54.554 "abort_timeout_sec": 1, 00:14:54.554 "ack_timeout": 0 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": "nvmf_create_subsystem", 00:14:54.554 "params": { 00:14:54.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.554 "allow_any_host": false, 00:14:54.554 "serial_number": "00000000000000000000", 00:14:54.554 "model_number": "SPDK bdev Controller", 00:14:54.554 "max_namespaces": 32, 00:14:54.554 "min_cntlid": 1, 00:14:54.554 "max_cntlid": 65519, 00:14:54.554 "ana_reporting": false 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": "nvmf_subsystem_add_host", 00:14:54.554 "params": { 00:14:54.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.554 "host": "nqn.2016-06.io.spdk:host1", 00:14:54.554 "psk": "key0" 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": "nvmf_subsystem_add_ns", 00:14:54.554 "params": { 00:14:54.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.554 "namespace": { 00:14:54.554 "nsid": 1, 00:14:54.554 "bdev_name": "malloc0", 00:14:54.554 "nguid": "87F5D703F26B4E07BBAF61838FA0AC00", 00:14:54.554 "uuid": "87f5d703-f26b-4e07-bbaf-61838fa0ac00", 00:14:54.554 "no_auto_visible": false 00:14:54.554 } 00:14:54.554 } 00:14:54.554 }, 00:14:54.554 { 00:14:54.554 "method": "nvmf_subsystem_add_listener", 00:14:54.554 "params": { 00:14:54.554 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:14:54.554 "listen_address": { 00:14:54.554 "trtype": "TCP", 00:14:54.554 "adrfam": "IPv4", 00:14:54.554 "traddr": "10.0.0.2", 00:14:54.554 "trsvcid": "4420" 00:14:54.554 }, 00:14:54.554 "secure_channel": true 00:14:54.554 } 00:14:54.554 } 00:14:54.554 ] 00:14:54.554 } 00:14:54.554 ] 00:14:54.554 }' 00:14:54.554 03:28:31 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:54.813 03:28:32 -- target/tls.sh@264 -- # bperfcfg='{ 00:14:54.813 "subsystems": [ 00:14:54.813 { 00:14:54.813 "subsystem": "keyring", 00:14:54.813 "config": [ 00:14:54.813 { 00:14:54.814 "method": "keyring_file_add_key", 00:14:54.814 "params": { 00:14:54.814 "name": "key0", 00:14:54.814 "path": "/tmp/tmp.rOBhabvfA8" 00:14:54.814 } 00:14:54.814 } 00:14:54.814 ] 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "subsystem": "iobuf", 00:14:54.814 "config": [ 00:14:54.814 { 00:14:54.814 "method": "iobuf_set_options", 00:14:54.814 "params": { 00:14:54.814 "small_pool_count": 8192, 00:14:54.814 "large_pool_count": 1024, 00:14:54.814 "small_bufsize": 8192, 00:14:54.814 "large_bufsize": 135168 00:14:54.814 } 00:14:54.814 } 00:14:54.814 ] 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "subsystem": "sock", 00:14:54.814 "config": [ 00:14:54.814 { 00:14:54.814 "method": "sock_impl_set_options", 00:14:54.814 "params": { 00:14:54.814 "impl_name": "posix", 00:14:54.814 "recv_buf_size": 2097152, 00:14:54.814 "send_buf_size": 2097152, 00:14:54.814 "enable_recv_pipe": true, 00:14:54.814 "enable_quickack": false, 00:14:54.814 "enable_placement_id": 0, 00:14:54.814 "enable_zerocopy_send_server": true, 00:14:54.814 "enable_zerocopy_send_client": false, 00:14:54.814 "zerocopy_threshold": 0, 00:14:54.814 "tls_version": 0, 00:14:54.814 "enable_ktls": false 00:14:54.814 } 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "method": "sock_impl_set_options", 00:14:54.814 "params": { 00:14:54.814 "impl_name": "ssl", 00:14:54.814 "recv_buf_size": 4096, 00:14:54.814 "send_buf_size": 4096, 00:14:54.814 "enable_recv_pipe": true, 00:14:54.814 "enable_quickack": false, 00:14:54.814 "enable_placement_id": 0, 00:14:54.814 "enable_zerocopy_send_server": true, 00:14:54.814 "enable_zerocopy_send_client": false, 00:14:54.814 "zerocopy_threshold": 0, 00:14:54.814 "tls_version": 0, 00:14:54.814 "enable_ktls": false 00:14:54.814 } 00:14:54.814 } 00:14:54.814 ] 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "subsystem": "vmd", 00:14:54.814 "config": [] 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "subsystem": "accel", 00:14:54.814 "config": [ 00:14:54.814 { 00:14:54.814 "method": "accel_set_options", 00:14:54.814 "params": { 00:14:54.814 "small_cache_size": 128, 00:14:54.814 "large_cache_size": 16, 00:14:54.814 "task_count": 2048, 00:14:54.814 "sequence_count": 2048, 00:14:54.814 "buf_count": 2048 00:14:54.814 } 00:14:54.814 } 00:14:54.814 ] 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "subsystem": "bdev", 00:14:54.814 "config": [ 00:14:54.814 { 00:14:54.814 "method": "bdev_set_options", 00:14:54.814 "params": { 00:14:54.814 "bdev_io_pool_size": 65535, 00:14:54.814 "bdev_io_cache_size": 256, 00:14:54.814 "bdev_auto_examine": true, 00:14:54.814 "iobuf_small_cache_size": 128, 00:14:54.814 "iobuf_large_cache_size": 16 00:14:54.814 } 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "method": "bdev_raid_set_options", 00:14:54.814 "params": { 00:14:54.814 "process_window_size_kb": 1024 00:14:54.814 } 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "method": "bdev_iscsi_set_options", 
00:14:54.814 "params": { 00:14:54.814 "timeout_sec": 30 00:14:54.814 } 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "method": "bdev_nvme_set_options", 00:14:54.814 "params": { 00:14:54.814 "action_on_timeout": "none", 00:14:54.814 "timeout_us": 0, 00:14:54.814 "timeout_admin_us": 0, 00:14:54.814 "keep_alive_timeout_ms": 10000, 00:14:54.814 "arbitration_burst": 0, 00:14:54.814 "low_priority_weight": 0, 00:14:54.814 "medium_priority_weight": 0, 00:14:54.814 "high_priority_weight": 0, 00:14:54.814 "nvme_adminq_poll_period_us": 10000, 00:14:54.814 "nvme_ioq_poll_period_us": 0, 00:14:54.814 "io_queue_requests": 512, 00:14:54.814 "delay_cmd_submit": true, 00:14:54.814 "transport_retry_count": 4, 00:14:54.814 "bdev_retry_count": 3, 00:14:54.814 "transport_ack_timeout": 0, 00:14:54.814 "ctrlr_loss_timeout_sec": 0, 00:14:54.814 "reconnect_delay_sec": 0, 00:14:54.814 "fast_io_fail_timeout_sec": 0, 00:14:54.814 "disable_auto_failback": false, 00:14:54.814 "generate_uuids": false, 00:14:54.814 "transport_tos": 0, 00:14:54.814 "nvme_error_stat": false, 00:14:54.814 "rdma_srq_size": 0, 00:14:54.814 "io_path_stat": false, 00:14:54.814 "allow_accel_sequence": false, 00:14:54.814 "rdma_max_cq_size": 0, 00:14:54.814 "rdma_cm_event_timeout_ms": 0, 00:14:54.814 "dhchap_digests": [ 00:14:54.814 "sha256", 00:14:54.814 "sha384", 00:14:54.814 "sha512" 00:14:54.814 ], 00:14:54.814 "dhchap_dhgroups": [ 00:14:54.814 "null", 00:14:54.814 "ffdhe2048", 00:14:54.814 "ffdhe3072", 00:14:54.814 "ffdhe4096", 00:14:54.814 "ffdhe6144", 00:14:54.814 "ffdhe8192" 00:14:54.814 ] 00:14:54.814 } 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "method": "bdev_nvme_attach_controller", 00:14:54.814 "params": { 00:14:54.814 "name": "nvme0", 00:14:54.814 "trtype": "TCP", 00:14:54.814 "adrfam": "IPv4", 00:14:54.814 "traddr": "10.0.0.2", 00:14:54.814 "trsvcid": "4420", 00:14:54.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.814 "prchk_reftag": false, 00:14:54.814 "prchk_guard": false, 00:14:54.814 "ctrlr_loss_timeout_sec": 0, 00:14:54.814 "reconnect_delay_sec": 0, 00:14:54.814 "fast_io_fail_timeout_sec": 0, 00:14:54.814 "psk": "key0", 00:14:54.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.814 "hdgst": false, 00:14:54.814 "ddgst": false 00:14:54.814 } 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "method": "bdev_nvme_set_hotplug", 00:14:54.814 "params": { 00:14:54.814 "period_us": 100000, 00:14:54.814 "enable": false 00:14:54.814 } 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "method": "bdev_enable_histogram", 00:14:54.814 "params": { 00:14:54.814 "name": "nvme0n1", 00:14:54.814 "enable": true 00:14:54.814 } 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "method": "bdev_wait_for_examine" 00:14:54.814 } 00:14:54.814 ] 00:14:54.814 }, 00:14:54.814 { 00:14:54.814 "subsystem": "nbd", 00:14:54.814 "config": [] 00:14:54.814 } 00:14:54.814 ] 00:14:54.814 }' 00:14:54.814 03:28:32 -- target/tls.sh@266 -- # killprocess 256874 00:14:54.814 03:28:32 -- common/autotest_common.sh@936 -- # '[' -z 256874 ']' 00:14:54.814 03:28:32 -- common/autotest_common.sh@940 -- # kill -0 256874 00:14:54.814 03:28:32 -- common/autotest_common.sh@941 -- # uname 00:14:54.814 03:28:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:54.814 03:28:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 256874 00:14:54.814 03:28:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:54.814 03:28:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:54.814 03:28:32 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 256874' 00:14:54.814 killing process with pid 256874 00:14:54.814 03:28:32 -- common/autotest_common.sh@955 -- # kill 256874 00:14:54.814 Received shutdown signal, test time was about 1.000000 seconds 00:14:54.814 00:14:54.814 Latency(us) 00:14:54.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.814 =================================================================================================================== 00:14:54.814 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:54.814 03:28:32 -- common/autotest_common.sh@960 -- # wait 256874 00:14:55.072 03:28:32 -- target/tls.sh@267 -- # killprocess 256726 00:14:55.072 03:28:32 -- common/autotest_common.sh@936 -- # '[' -z 256726 ']' 00:14:55.072 03:28:32 -- common/autotest_common.sh@940 -- # kill -0 256726 00:14:55.072 03:28:32 -- common/autotest_common.sh@941 -- # uname 00:14:55.072 03:28:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:55.072 03:28:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 256726 00:14:55.072 03:28:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:55.072 03:28:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:55.072 03:28:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 256726' 00:14:55.072 killing process with pid 256726 00:14:55.072 03:28:32 -- common/autotest_common.sh@955 -- # kill 256726 00:14:55.072 03:28:32 -- common/autotest_common.sh@960 -- # wait 256726 00:14:55.331 03:28:32 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:55.331 03:28:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:55.331 03:28:32 -- target/tls.sh@269 -- # echo '{ 00:14:55.331 "subsystems": [ 00:14:55.331 { 00:14:55.331 "subsystem": "keyring", 00:14:55.331 "config": [ 00:14:55.331 { 00:14:55.331 "method": "keyring_file_add_key", 00:14:55.331 "params": { 00:14:55.331 "name": "key0", 00:14:55.331 "path": "/tmp/tmp.rOBhabvfA8" 00:14:55.331 } 00:14:55.331 } 00:14:55.331 ] 00:14:55.331 }, 00:14:55.331 { 00:14:55.331 "subsystem": "iobuf", 00:14:55.331 "config": [ 00:14:55.331 { 00:14:55.331 "method": "iobuf_set_options", 00:14:55.331 "params": { 00:14:55.331 "small_pool_count": 8192, 00:14:55.331 "large_pool_count": 1024, 00:14:55.331 "small_bufsize": 8192, 00:14:55.331 "large_bufsize": 135168 00:14:55.331 } 00:14:55.331 } 00:14:55.331 ] 00:14:55.331 }, 00:14:55.331 { 00:14:55.331 "subsystem": "sock", 00:14:55.331 "config": [ 00:14:55.331 { 00:14:55.331 "method": "sock_impl_set_options", 00:14:55.331 "params": { 00:14:55.331 "impl_name": "posix", 00:14:55.331 "recv_buf_size": 2097152, 00:14:55.331 "send_buf_size": 2097152, 00:14:55.331 "enable_recv_pipe": true, 00:14:55.331 "enable_quickack": false, 00:14:55.331 "enable_placement_id": 0, 00:14:55.331 "enable_zerocopy_send_server": true, 00:14:55.331 "enable_zerocopy_send_client": false, 00:14:55.331 "zerocopy_threshold": 0, 00:14:55.331 "tls_version": 0, 00:14:55.331 "enable_ktls": false 00:14:55.331 } 00:14:55.331 }, 00:14:55.331 { 00:14:55.331 "method": "sock_impl_set_options", 00:14:55.331 "params": { 00:14:55.331 "impl_name": "ssl", 00:14:55.331 "recv_buf_size": 4096, 00:14:55.331 "send_buf_size": 4096, 00:14:55.331 "enable_recv_pipe": true, 00:14:55.331 "enable_quickack": false, 00:14:55.331 "enable_placement_id": 0, 00:14:55.331 "enable_zerocopy_send_server": true, 00:14:55.331 "enable_zerocopy_send_client": false, 00:14:55.331 "zerocopy_threshold": 0, 00:14:55.331 "tls_version": 0, 00:14:55.331 
"enable_ktls": false 00:14:55.331 } 00:14:55.331 } 00:14:55.331 ] 00:14:55.331 }, 00:14:55.331 { 00:14:55.331 "subsystem": "vmd", 00:14:55.331 "config": [] 00:14:55.331 }, 00:14:55.331 { 00:14:55.331 "subsystem": "accel", 00:14:55.331 "config": [ 00:14:55.331 { 00:14:55.331 "method": "accel_set_options", 00:14:55.331 "params": { 00:14:55.331 "small_cache_size": 128, 00:14:55.331 "large_cache_size": 16, 00:14:55.331 "task_count": 2048, 00:14:55.331 "sequence_count": 2048, 00:14:55.331 "buf_count": 2048 00:14:55.331 } 00:14:55.331 } 00:14:55.331 ] 00:14:55.331 }, 00:14:55.331 { 00:14:55.331 "subsystem": "bdev", 00:14:55.331 "config": [ 00:14:55.331 { 00:14:55.331 "method": "bdev_set_options", 00:14:55.331 "params": { 00:14:55.331 "bdev_io_pool_size": 65535, 00:14:55.331 "bdev_io_cache_size": 256, 00:14:55.331 "bdev_auto_examine": true, 00:14:55.331 "iobuf_small_cache_size": 128, 00:14:55.331 "iobuf_large_cache_size": 16 00:14:55.331 } 00:14:55.331 }, 00:14:55.331 { 00:14:55.331 "method": "bdev_raid_set_options", 00:14:55.331 "params": { 00:14:55.331 "process_window_size_kb": 1024 00:14:55.331 } 00:14:55.331 }, 00:14:55.331 { 00:14:55.331 "method": "bdev_iscsi_set_options", 00:14:55.331 "params": { 00:14:55.331 "timeout_sec": 30 00:14:55.331 } 00:14:55.331 }, 00:14:55.331 { 00:14:55.331 "method": "bdev_nvme_set_options", 00:14:55.331 "params": { 00:14:55.331 "action_on_timeout": "none", 00:14:55.331 "timeout_us": 0, 00:14:55.331 "timeout_admin_us": 0, 00:14:55.331 "keep_alive_timeout_ms": 10000, 00:14:55.331 "arbitration_burst": 0, 00:14:55.331 "low_priority_weight": 0, 00:14:55.331 "medium_priority_weight": 0, 00:14:55.331 "high_priority_weight": 0, 00:14:55.331 "nvme_adminq_poll_period_us": 10000, 00:14:55.331 "nvme_ioq_poll_period_us": 0, 00:14:55.331 "io_queue_requests": 0, 00:14:55.331 "delay_cmd_submit": true, 00:14:55.331 "transport_retry_count": 4, 00:14:55.331 "bdev_retry_count": 3, 00:14:55.331 "transport_ack_timeout": 0, 00:14:55.331 "ctrlr_loss_timeout_sec": 0, 00:14:55.331 "reconnect_delay_sec": 0, 00:14:55.331 "fast_io_fail_timeout_sec": 0, 00:14:55.331 "disable_auto_failback": false, 00:14:55.331 "generate_uuids": false, 00:14:55.331 "transport_tos": 0, 00:14:55.331 "nvme_error_stat": false, 00:14:55.331 "rdma_srq_size": 0, 00:14:55.331 "io_path_stat": false, 00:14:55.331 "allow_accel_sequence": false, 00:14:55.331 "rdma_max_cq_size": 0, 00:14:55.331 "rdma_cm_event_timeout_ms": 0, 00:14:55.331 "dhchap_digests": [ 00:14:55.331 "sha256", 00:14:55.331 "sha384", 00:14:55.331 "sha512" 00:14:55.331 ], 00:14:55.331 "dhchap_dhgroups": [ 00:14:55.331 "null", 00:14:55.331 "ffdhe2048", 00:14:55.332 "ffdhe3072", 00:14:55.332 "ffdhe4096", 00:14:55.332 "ffdhe6144", 00:14:55.332 "ffdhe8192" 00:14:55.332 ] 00:14:55.332 } 00:14:55.332 }, 00:14:55.332 { 00:14:55.332 "method": "bdev_nvme_set_hotplug", 00:14:55.332 "params": { 00:14:55.332 "period_us": 100000, 00:14:55.332 "enable": false 00:14:55.332 } 00:14:55.332 }, 00:14:55.332 { 00:14:55.332 "method": "bdev_malloc_create", 00:14:55.332 "params": { 00:14:55.332 "name": "malloc0", 00:14:55.332 "num_blocks": 8192, 00:14:55.332 "block_size": 4096, 00:14:55.332 "physical_block_size": 4096, 00:14:55.332 "uuid": "87f5d703-f26b-4e07-bbaf-61838fa0ac00", 00:14:55.332 "optimal_io_boundary": 0 00:14:55.332 } 00:14:55.332 }, 00:14:55.332 { 00:14:55.332 "method": "bdev_wait_for_examine" 00:14:55.332 } 00:14:55.332 ] 00:14:55.332 }, 00:14:55.332 { 00:14:55.332 "subsystem": "nbd", 00:14:55.332 "config": [] 00:14:55.332 }, 00:14:55.332 { 00:14:55.332 
"subsystem": "scheduler", 00:14:55.332 "config": [ 00:14:55.332 { 00:14:55.332 "method": "framework_set_scheduler", 00:14:55.332 "params": { 00:14:55.332 "name": "static" 00:14:55.332 } 00:14:55.332 } 00:14:55.332 ] 00:14:55.332 }, 00:14:55.332 { 00:14:55.332 "subsystem": "nvmf", 00:14:55.332 "config": [ 00:14:55.332 { 00:14:55.332 "method": "nvmf_set_config", 00:14:55.332 "params": { 00:14:55.332 "discovery_filter": "match_any", 00:14:55.332 "admin_cmd_passthru": { 00:14:55.332 "identify_ctrlr": false 00:14:55.332 } 00:14:55.332 } 00:14:55.332 }, 00:14:55.332 { 00:14:55.332 "method": "nvmf_set_max_subsystems", 00:14:55.332 "params": { 00:14:55.332 "max_subsystems": 1024 00:14:55.332 } 00:14:55.332 }, 00:14:55.332 { 00:14:55.332 "method": "nvmf_set_crdt", 00:14:55.332 "params": { 00:14:55.332 "crdt1": 0, 00:14:55.332 "crdt2": 0, 00:14:55.332 "crdt3": 0 00:14:55.332 } 00:14:55.332 }, 00:14:55.332 { 00:14:55.332 "method": "nvmf_create_transport", 00:14:55.332 "params": { 00:14:55.332 "trtype": "TCP", 00:14:55.332 "max_queue_depth": 128, 00:14:55.332 "max_io_qpairs_per_ctrlr": 127, 00:14:55.332 "in_capsule_data_size": 4096, 00:14:55.332 "max_io_size": 131072, 00:14:55.332 "io_unit_size": 131072, 00:14:55.332 "max_aq_depth": 128, 00:14:55.332 "num_shared_buffers": 511, 00:14:55.332 "buf_cache_size": 4294967295, 00:14:55.332 "dif_insert_or_strip": false, 00:14:55.332 "zcopy": false, 00:14:55.332 "c2h_success": false, 00:14:55.332 "sock_priority": 0, 00:14:55.332 "abort_timeout_sec": 1, 00:14:55.332 "ack_timeout": 0 00:14:55.332 } 00:14:55.332 }, 00:14:55.332 { 00:14:55.332 "method": "nvmf_create_subsystem", 00:14:55.332 "params": { 00:14:55.332 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.332 "allow_any_host": false, 00:14:55.332 "serial_number": "00000000000000000000", 00:14:55.332 "model_number": "SPDK bdev Controller", 00:14:55.332 "max_namespaces": 32, 00:14:55.332 "min_cntlid": 1, 00:14:55.332 "max_cntlid": 65519, 00:14:55.332 "ana_reporting": false 00:14:55.332 } 00:14:55.332 }, 00:14:55.332 { 00:14:55.332 "method": "nvmf_subsystem_add_host", 00:14:55.332 "params": { 00:14:55.332 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.332 "host": "nqn.2016-06.io.spdk:host1", 00:14:55.332 "psk": "key0" 00:14:55.332 } 00:14:55.332 }, 00:14:55.332 { 00:14:55.332 "method": "nvmf_subsystem_add_ns", 00:14:55.332 "params": { 00:14:55.332 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.332 "namespace": { 00:14:55.332 "nsid": 1, 00:14:55.332 "bdev_name": "malloc0", 00:14:55.332 "nguid": "87F5D703F26B4E07BBAF61838FA0AC00", 00:14:55.332 "uuid": "87f5d703-f26b-4e07-bbaf-61838fa0ac00", 00:14:55.332 "no_auto_visible": false 00:14:55.332 } 00:14:55.332 } 00:14:55.332 }, 00:14:55.332 { 00:14:55.332 "method": "nvmf_subsystem_add_listener", 00:14:55.332 "params": { 00:14:55.332 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.332 "listen_address": { 00:14:55.332 "trtype": "TCP", 00:14:55.332 "adrfam": "IPv4", 00:14:55.332 "traddr": "10.0.0.2", 00:14:55.332 "trsvcid": "4420" 00:14:55.332 }, 00:14:55.332 "secure_channel": true 00:14:55.332 } 00:14:55.332 } 00:14:55.332 ] 00:14:55.332 } 00:14:55.332 ] 00:14:55.332 }' 00:14:55.332 03:28:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:55.332 03:28:32 -- common/autotest_common.sh@10 -- # set +x 00:14:55.332 03:28:32 -- nvmf/common.sh@470 -- # nvmfpid=257283 00:14:55.332 03:28:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:55.332 03:28:32 -- 
nvmf/common.sh@471 -- # waitforlisten 257283 00:14:55.332 03:28:32 -- common/autotest_common.sh@817 -- # '[' -z 257283 ']' 00:14:55.332 03:28:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.332 03:28:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:55.332 03:28:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.332 03:28:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:55.332 03:28:32 -- common/autotest_common.sh@10 -- # set +x 00:14:55.332 [2024-04-19 03:28:32.843328] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:14:55.332 [2024-04-19 03:28:32.843426] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.332 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.591 [2024-04-19 03:28:32.911486] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.591 [2024-04-19 03:28:33.033083] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.591 [2024-04-19 03:28:33.033158] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.591 [2024-04-19 03:28:33.033176] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.591 [2024-04-19 03:28:33.033190] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.591 [2024-04-19 03:28:33.033202] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
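Note the -c /dev/fd/62 on the nvmf_tgt command line above: this second target instance is not configured through live RPCs. The JSON blob captured earlier by save_config (and echoed by target/tls.sh@269) is fed straight into the app at startup, so the keyring entry, the TLS listener, and the subsystem are all restored from saved state. A minimal sketch of the same pattern, assuming the paths used throughout this log (the concrete fd number is simply whatever bash assigns the substituted pipe):

# capture the live configuration, then replay it into a fresh target
tgtcfg=$(scripts/rpc.py save_config)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF \
    -c <(echo "$tgtcfg") &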
00:14:55.591 [2024-04-19 03:28:33.033317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.848 [2024-04-19 03:28:33.264875] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.848 [2024-04-19 03:28:33.296891] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:55.848 [2024-04-19 03:28:33.314563] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.413 03:28:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:56.413 03:28:33 -- common/autotest_common.sh@850 -- # return 0 00:14:56.414 03:28:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:56.414 03:28:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:56.414 03:28:33 -- common/autotest_common.sh@10 -- # set +x 00:14:56.414 03:28:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.414 03:28:33 -- target/tls.sh@272 -- # bdevperf_pid=257435 00:14:56.414 03:28:33 -- target/tls.sh@273 -- # waitforlisten 257435 /var/tmp/bdevperf.sock 00:14:56.414 03:28:33 -- common/autotest_common.sh@817 -- # '[' -z 257435 ']' 00:14:56.414 03:28:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:56.414 03:28:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:56.414 03:28:33 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:56.414 03:28:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:56.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
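The initiator side repeats the same replay trick: bdevperf is started with -c /dev/fd/63 and handed the bperfcfg JSON saved from the previous run (echoed immediately below), so key0 and the TLS-attached nvme0 controller already exist by the time the app is up. The only remaining RPC is the test trigger. In outline, with the flags exactly as above:

# bdevperf pre-configured from the saved JSON; no setup RPCs needed
./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
# kick off the verify workload once the socket is up
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests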
00:14:56.414 03:28:33 -- target/tls.sh@270 -- # echo '{ 00:14:56.414 "subsystems": [ 00:14:56.414 { 00:14:56.414 "subsystem": "keyring", 00:14:56.414 "config": [ 00:14:56.414 { 00:14:56.414 "method": "keyring_file_add_key", 00:14:56.414 "params": { 00:14:56.414 "name": "key0", 00:14:56.414 "path": "/tmp/tmp.rOBhabvfA8" 00:14:56.414 } 00:14:56.414 } 00:14:56.414 ] 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "subsystem": "iobuf", 00:14:56.414 "config": [ 00:14:56.414 { 00:14:56.414 "method": "iobuf_set_options", 00:14:56.414 "params": { 00:14:56.414 "small_pool_count": 8192, 00:14:56.414 "large_pool_count": 1024, 00:14:56.414 "small_bufsize": 8192, 00:14:56.414 "large_bufsize": 135168 00:14:56.414 } 00:14:56.414 } 00:14:56.414 ] 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "subsystem": "sock", 00:14:56.414 "config": [ 00:14:56.414 { 00:14:56.414 "method": "sock_impl_set_options", 00:14:56.414 "params": { 00:14:56.414 "impl_name": "posix", 00:14:56.414 "recv_buf_size": 2097152, 00:14:56.414 "send_buf_size": 2097152, 00:14:56.414 "enable_recv_pipe": true, 00:14:56.414 "enable_quickack": false, 00:14:56.414 "enable_placement_id": 0, 00:14:56.414 "enable_zerocopy_send_server": true, 00:14:56.414 "enable_zerocopy_send_client": false, 00:14:56.414 "zerocopy_threshold": 0, 00:14:56.414 "tls_version": 0, 00:14:56.414 "enable_ktls": false 00:14:56.414 } 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "method": "sock_impl_set_options", 00:14:56.414 "params": { 00:14:56.414 "impl_name": "ssl", 00:14:56.414 "recv_buf_size": 4096, 00:14:56.414 "send_buf_size": 4096, 00:14:56.414 "enable_recv_pipe": true, 00:14:56.414 "enable_quickack": false, 00:14:56.414 "enable_placement_id": 0, 00:14:56.414 "enable_zerocopy_send_server": true, 00:14:56.414 "enable_zerocopy_send_client": false, 00:14:56.414 "zerocopy_threshold": 0, 00:14:56.414 "tls_version": 0, 00:14:56.414 "enable_ktls": false 00:14:56.414 } 00:14:56.414 } 00:14:56.414 ] 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "subsystem": "vmd", 00:14:56.414 "config": [] 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "subsystem": "accel", 00:14:56.414 "config": [ 00:14:56.414 { 00:14:56.414 "method": "accel_set_options", 00:14:56.414 "params": { 00:14:56.414 "small_cache_size": 128, 00:14:56.414 "large_cache_size": 16, 00:14:56.414 "task_count": 2048, 00:14:56.414 "sequence_count": 2048, 00:14:56.414 "buf_count": 2048 00:14:56.414 } 00:14:56.414 } 00:14:56.414 ] 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "subsystem": "bdev", 00:14:56.414 "config": [ 00:14:56.414 { 00:14:56.414 "method": "bdev_set_options", 00:14:56.414 "params": { 00:14:56.414 "bdev_io_pool_size": 65535, 00:14:56.414 "bdev_io_cache_size": 256, 00:14:56.414 "bdev_auto_examine": true, 00:14:56.414 "iobuf_small_cache_size": 128, 00:14:56.414 "iobuf_large_cache_size": 16 00:14:56.414 } 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "method": "bdev_raid_set_options", 00:14:56.414 "params": { 00:14:56.414 "process_window_size_kb": 1024 00:14:56.414 } 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "method": "bdev_iscsi_set_options", 00:14:56.414 "params": { 00:14:56.414 "timeout_sec": 30 00:14:56.414 } 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "method": "bdev_nvme_set_options", 00:14:56.414 "params": { 00:14:56.414 "action_on_timeout": "none", 00:14:56.414 "timeout_us": 0, 00:14:56.414 "timeout_admin_us": 0, 00:14:56.414 "keep_alive_timeout_ms": 10000, 00:14:56.414 "arbitration_burst": 0, 00:14:56.414 "low_priority_weight": 0, 00:14:56.414 "medium_priority_weight": 0, 00:14:56.414 "high_priority_weight": 0, 
00:14:56.414 "nvme_adminq_poll_period_us": 10000, 00:14:56.414 "nvme_ioq_poll_period_us": 0, 00:14:56.414 "io_queue_requests": 512, 00:14:56.414 "delay_cmd_submit": true, 00:14:56.414 "transport_retry_count": 4, 00:14:56.414 "bdev_retry_count": 3, 00:14:56.414 "transport_ack_timeout": 0, 00:14:56.414 "ctrlr_loss_timeout_sec": 0, 00:14:56.414 "reconnect_delay_sec": 0, 00:14:56.414 "fast_io_fail_timeout_sec": 0, 00:14:56.414 "disable_auto_failback": false, 00:14:56.414 "generate_uuids": false, 00:14:56.414 "transport_tos": 0, 00:14:56.414 "nvme_error_stat": false, 00:14:56.414 "rdma_srq_size": 0, 00:14:56.414 "io_path_stat": false, 00:14:56.414 "allow_accel_sequence": false, 00:14:56.414 "rdma_max_cq_size": 0, 00:14:56.414 "rdma_cm_event_timeout_ms": 0, 00:14:56.414 "dhchap_digests": [ 00:14:56.414 "sha256", 00:14:56.414 "sha384", 00:14:56.414 "sha512" 00:14:56.414 ], 00:14:56.414 "dhchap_dhgroups": [ 00:14:56.414 "null", 00:14:56.414 "ffdhe2048", 00:14:56.414 "ffdhe3072", 00:14:56.414 "ffdhe4096", 00:14:56.414 "ffdhe6144", 00:14:56.414 "ffdhe8192" 00:14:56.414 ] 00:14:56.414 } 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "method": "bdev_nvme_attach_controller", 00:14:56.414 "params": { 00:14:56.414 "name": "nvme0", 00:14:56.414 "trtype": "TCP", 00:14:56.414 "adrfam": "IPv4", 00:14:56.414 "traddr": "10.0.0.2", 00:14:56.414 "trsvcid": "4420", 00:14:56.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.414 "prchk_reftag": false, 00:14:56.414 "prchk_guard": false, 00:14:56.414 "ctrlr_loss_timeout_sec": 0, 00:14:56.414 "reconnect_delay_sec": 0, 00:14:56.414 "fast_io_fail_timeout_sec": 0, 00:14:56.414 "psk": "key0", 00:14:56.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.414 "hdgst": false, 00:14:56.414 "ddgst": false 00:14:56.414 } 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "method": "bdev_nvme_set_hotplug", 00:14:56.414 "params": { 00:14:56.414 "period_us": 100000, 00:14:56.414 "enable": false 00:14:56.414 } 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "method": "bdev_enable_histogram", 00:14:56.414 "params": { 00:14:56.414 "name": "nvme0n1", 00:14:56.414 "enable": true 00:14:56.414 } 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "method": "bdev_wait_for_examine" 00:14:56.414 } 00:14:56.414 ] 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "subsystem": "nbd", 00:14:56.414 "config": [] 00:14:56.414 } 00:14:56.414 ] 00:14:56.414 }' 00:14:56.414 03:28:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:56.414 03:28:33 -- common/autotest_common.sh@10 -- # set +x 00:14:56.414 [2024-04-19 03:28:33.930179] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:14:56.414 [2024-04-19 03:28:33.930266] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257435 ] 00:14:56.414 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.672 [2024-04-19 03:28:33.990553] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.672 [2024-04-19 03:28:34.103578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.931 [2024-04-19 03:28:34.281246] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:57.496 03:28:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:57.496 03:28:34 -- common/autotest_common.sh@850 -- # return 0 00:14:57.496 03:28:34 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:57.496 03:28:34 -- target/tls.sh@275 -- # jq -r '.[].name' 00:14:57.753 03:28:35 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.754 03:28:35 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:57.754 Running I/O for 1 seconds... 00:14:59.126 00:14:59.126 Latency(us) 00:14:59.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.126 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:59.126 Verification LBA range: start 0x0 length 0x2000 00:14:59.126 nvme0n1 : 1.05 2487.03 9.71 0.00 0.00 50443.73 6116.69 88546.42 00:14:59.126 =================================================================================================================== 00:14:59.126 Total : 2487.03 9.71 0.00 0.00 50443.73 6116.69 88546.42 00:14:59.126 0 00:14:59.126 03:28:36 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:14:59.126 03:28:36 -- target/tls.sh@279 -- # cleanup 00:14:59.126 03:28:36 -- target/tls.sh@15 -- # process_shm --id 0 00:14:59.126 03:28:36 -- common/autotest_common.sh@794 -- # type=--id 00:14:59.126 03:28:36 -- common/autotest_common.sh@795 -- # id=0 00:14:59.126 03:28:36 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:14:59.126 03:28:36 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:59.126 03:28:36 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:14:59.126 03:28:36 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:14:59.126 03:28:36 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:14:59.126 03:28:36 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:59.126 nvmf_trace.0 00:14:59.126 03:28:36 -- common/autotest_common.sh@809 -- # return 0 00:14:59.126 03:28:36 -- target/tls.sh@16 -- # killprocess 257435 00:14:59.126 03:28:36 -- common/autotest_common.sh@936 -- # '[' -z 257435 ']' 00:14:59.126 03:28:36 -- common/autotest_common.sh@940 -- # kill -0 257435 00:14:59.126 03:28:36 -- common/autotest_common.sh@941 -- # uname 00:14:59.126 03:28:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.126 03:28:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 257435 00:14:59.126 03:28:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:59.126 03:28:36 -- common/autotest_common.sh@946 -- # '[' 
reactor_1 = sudo ']' 00:14:59.126 03:28:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 257435' 00:14:59.126 killing process with pid 257435 00:14:59.126 03:28:36 -- common/autotest_common.sh@955 -- # kill 257435 00:14:59.126 Received shutdown signal, test time was about 1.000000 seconds 00:14:59.126 00:14:59.126 Latency(us) 00:14:59.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.126 =================================================================================================================== 00:14:59.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.126 03:28:36 -- common/autotest_common.sh@960 -- # wait 257435 00:14:59.126 03:28:36 -- target/tls.sh@17 -- # nvmftestfini 00:14:59.126 03:28:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:59.126 03:28:36 -- nvmf/common.sh@117 -- # sync 00:14:59.126 03:28:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59.126 03:28:36 -- nvmf/common.sh@120 -- # set +e 00:14:59.126 03:28:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.126 03:28:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:59.126 rmmod nvme_tcp 00:14:59.126 rmmod nvme_fabrics 00:14:59.384 rmmod nvme_keyring 00:14:59.384 03:28:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.384 03:28:36 -- nvmf/common.sh@124 -- # set -e 00:14:59.384 03:28:36 -- nvmf/common.sh@125 -- # return 0 00:14:59.384 03:28:36 -- nvmf/common.sh@478 -- # '[' -n 257283 ']' 00:14:59.384 03:28:36 -- nvmf/common.sh@479 -- # killprocess 257283 00:14:59.384 03:28:36 -- common/autotest_common.sh@936 -- # '[' -z 257283 ']' 00:14:59.384 03:28:36 -- common/autotest_common.sh@940 -- # kill -0 257283 00:14:59.384 03:28:36 -- common/autotest_common.sh@941 -- # uname 00:14:59.384 03:28:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.384 03:28:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 257283 00:14:59.384 03:28:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:59.384 03:28:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:59.384 03:28:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 257283' 00:14:59.384 killing process with pid 257283 00:14:59.384 03:28:36 -- common/autotest_common.sh@955 -- # kill 257283 00:14:59.384 03:28:36 -- common/autotest_common.sh@960 -- # wait 257283 00:14:59.643 03:28:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:59.643 03:28:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:59.643 03:28:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:59.643 03:28:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.643 03:28:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:59.643 03:28:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.643 03:28:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.643 03:28:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.547 03:28:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:01.547 03:28:39 -- target/tls.sh@18 -- # rm -f /tmp/tmp.2nwlvtPMTf /tmp/tmp.tGR07DitC0 /tmp/tmp.rOBhabvfA8 00:15:01.547 00:15:01.547 real 1m22.295s 00:15:01.547 user 2m10.160s 00:15:01.547 sys 0m28.519s 00:15:01.547 03:28:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:01.547 03:28:39 -- common/autotest_common.sh@10 -- # set +x 00:15:01.547 ************************************ 00:15:01.547 END TEST nvmf_tls 00:15:01.547 
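This is the explicit cleanup path of target/tls.sh (the exit trap is cleared at tls.sh@278, then cleanup runs): archive the shared-memory trace file, kill bdevperf and the target, and let nvmftestfini unload the nvme kernel modules. A stripped-down equivalent of the whole sequence, including the key-file removal that follows a few lines below; the pids and paths are the ones from this particular run:

# archive the trace left in /dev/shm for offline debugging
tar -C /dev/shm/ -cvzf output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
kill "$bdevperf_pid" && wait "$bdevperf_pid"   # 257435 in this run
kill "$nvmfpid" && wait "$nvmfpid"             # 257283 in this run
modprobe -v -r nvme-tcp nvme-fabrics           # also drops nvme_keyring
rm -f /tmp/tmp.2nwlvtPMTf /tmp/tmp.tGR07DitC0 /tmp/tmp.rOBhabvfA8  # PSK files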
************************************ 00:15:01.547 03:28:39 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:01.547 03:28:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:01.547 03:28:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:01.547 03:28:39 -- common/autotest_common.sh@10 -- # set +x 00:15:01.806 ************************************ 00:15:01.806 START TEST nvmf_fips 00:15:01.806 ************************************ 00:15:01.806 03:28:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:01.806 * Looking for test storage... 00:15:01.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:15:01.806 03:28:39 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.806 03:28:39 -- nvmf/common.sh@7 -- # uname -s 00:15:01.806 03:28:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.806 03:28:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.806 03:28:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.806 03:28:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.806 03:28:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.806 03:28:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.806 03:28:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.806 03:28:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.806 03:28:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.806 03:28:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.806 03:28:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.806 03:28:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.806 03:28:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.806 03:28:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.806 03:28:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.806 03:28:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.806 03:28:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.806 03:28:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.806 03:28:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.806 03:28:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.806 03:28:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.806 03:28:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.806 03:28:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.806 03:28:39 -- paths/export.sh@5 -- # export PATH 00:15:01.806 03:28:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.806 03:28:39 -- nvmf/common.sh@47 -- # : 0 00:15:01.806 03:28:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:01.806 03:28:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:01.806 03:28:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.806 03:28:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.806 03:28:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.806 03:28:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:01.806 03:28:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:01.806 03:28:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:01.806 03:28:39 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.806 03:28:39 -- fips/fips.sh@89 -- # check_openssl_version 00:15:01.806 03:28:39 -- fips/fips.sh@83 -- # local target=3.0.0 00:15:01.806 03:28:39 -- fips/fips.sh@85 -- # openssl version 00:15:01.806 03:28:39 -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:01.806 03:28:39 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:01.806 03:28:39 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:01.806 03:28:39 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:01.806 03:28:39 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:01.806 03:28:39 -- scripts/common.sh@333 -- # IFS=.-: 00:15:01.806 03:28:39 -- scripts/common.sh@333 -- # read -ra ver1 00:15:01.806 03:28:39 -- scripts/common.sh@334 -- # IFS=.-: 00:15:01.806 03:28:39 -- scripts/common.sh@334 -- # read -ra ver2 00:15:01.806 03:28:39 -- scripts/common.sh@335 -- # local 'op=>=' 00:15:01.806 03:28:39 -- scripts/common.sh@337 -- # ver1_l=3 00:15:01.806 03:28:39 -- scripts/common.sh@338 -- # ver2_l=3 00:15:01.806 03:28:39 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:15:01.806 03:28:39 -- scripts/common.sh@341 -- # case "$op" in 00:15:01.806 03:28:39 -- scripts/common.sh@345 -- # : 1 00:15:01.806 03:28:39 -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:01.806 03:28:39 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:01.806 03:28:39 -- scripts/common.sh@362 -- # decimal 3 00:15:01.806 03:28:39 -- scripts/common.sh@350 -- # local d=3 00:15:01.806 03:28:39 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:01.806 03:28:39 -- scripts/common.sh@352 -- # echo 3 00:15:01.806 03:28:39 -- scripts/common.sh@362 -- # ver1[v]=3 00:15:01.806 03:28:39 -- scripts/common.sh@363 -- # decimal 3 00:15:01.806 03:28:39 -- scripts/common.sh@350 -- # local d=3 00:15:01.806 03:28:39 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:01.806 03:28:39 -- scripts/common.sh@352 -- # echo 3 00:15:01.806 03:28:39 -- scripts/common.sh@363 -- # ver2[v]=3 00:15:01.806 03:28:39 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:01.806 03:28:39 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:01.806 03:28:39 -- scripts/common.sh@361 -- # (( v++ )) 00:15:01.806 03:28:39 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:01.806 03:28:39 -- scripts/common.sh@362 -- # decimal 0 00:15:01.806 03:28:39 -- scripts/common.sh@350 -- # local d=0 00:15:01.806 03:28:39 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:01.806 03:28:39 -- scripts/common.sh@352 -- # echo 0 00:15:01.806 03:28:39 -- scripts/common.sh@362 -- # ver1[v]=0 00:15:01.806 03:28:39 -- scripts/common.sh@363 -- # decimal 0 00:15:01.806 03:28:39 -- scripts/common.sh@350 -- # local d=0 00:15:01.806 03:28:39 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:01.806 03:28:39 -- scripts/common.sh@352 -- # echo 0 00:15:01.806 03:28:39 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:01.806 03:28:39 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:01.806 03:28:39 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:01.806 03:28:39 -- scripts/common.sh@361 -- # (( v++ )) 00:15:01.806 03:28:39 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:01.806 03:28:39 -- scripts/common.sh@362 -- # decimal 9 00:15:01.806 03:28:39 -- scripts/common.sh@350 -- # local d=9 00:15:01.806 03:28:39 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:01.806 03:28:39 -- scripts/common.sh@352 -- # echo 9 00:15:01.806 03:28:39 -- scripts/common.sh@362 -- # ver1[v]=9 00:15:01.806 03:28:39 -- scripts/common.sh@363 -- # decimal 0 00:15:01.806 03:28:39 -- scripts/common.sh@350 -- # local d=0 00:15:01.806 03:28:39 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:01.806 03:28:39 -- scripts/common.sh@352 -- # echo 0 00:15:01.806 03:28:39 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:01.806 03:28:39 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:01.806 03:28:39 -- scripts/common.sh@364 -- # return 0 00:15:01.806 03:28:39 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:01.806 03:28:39 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:15:01.806 03:28:39 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:01.806 03:28:39 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:01.806 03:28:39 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:01.806 03:28:39 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:01.806 03:28:39 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:01.806 03:28:39 -- fips/fips.sh@113 -- # build_openssl_config 00:15:01.806 03:28:39 -- fips/fips.sh@37 -- # cat 00:15:01.806 03:28:39 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:15:01.806 03:28:39 -- fips/fips.sh@58 -- # cat - 00:15:01.806 03:28:39 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:01.806 03:28:39 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:01.806 03:28:39 -- fips/fips.sh@116 -- # mapfile -t providers 00:15:01.806 03:28:39 -- fips/fips.sh@116 -- # openssl list -providers 00:15:01.806 03:28:39 -- fips/fips.sh@116 -- # grep name 00:15:01.806 03:28:39 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:01.806 03:28:39 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:01.806 03:28:39 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:01.806 03:28:39 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:01.807 03:28:39 -- fips/fips.sh@127 -- # : 00:15:01.807 03:28:39 -- common/autotest_common.sh@638 -- # local es=0 00:15:01.807 03:28:39 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:01.807 03:28:39 -- common/autotest_common.sh@626 -- # local arg=openssl 00:15:01.807 03:28:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:01.807 03:28:39 -- common/autotest_common.sh@630 -- # type -t openssl 00:15:01.807 03:28:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:01.807 03:28:39 -- common/autotest_common.sh@632 -- # type -P openssl 00:15:01.807 03:28:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:01.807 03:28:39 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:15:01.807 03:28:39 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:15:01.807 03:28:39 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:15:02.065 Error setting digest 00:15:02.065 0032E4BB0B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:02.065 0032E4BB0B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:02.065 03:28:39 -- common/autotest_common.sh@641 -- # es=1 00:15:02.065 03:28:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:02.065 03:28:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:02.065 03:28:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:02.065 03:28:39 -- fips/fips.sh@130 -- # nvmftestinit 00:15:02.065 03:28:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:02.065 03:28:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.065 03:28:39 -- nvmf/common.sh@437 -- # prepare_net_devs 
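The "Error setting digest" lines above are the expected result, not a failure: after confirming OpenSSL >= 3.0.0 and that both the base and fips providers are loaded, fips.sh points OPENSSL_CONF at spdk_fips.conf and runs openssl md5 under NOT, i.e. the test passes only if MD5 is rejected. A condensed sketch of that negative probe, assuming the same generated config file:

# FIPS sanity probe: MD5 must be unavailable once the fips provider is active
export OPENSSL_CONF=spdk_fips.conf
if echo probe | openssl md5 >/dev/null 2>&1; then
    echo "FIPS mode is not active: MD5 unexpectedly succeeded" >&2
    exit 1
fi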
00:15:02.065 03:28:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:02.065 03:28:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:02.065 03:28:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.065 03:28:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.065 03:28:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.065 03:28:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:02.065 03:28:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:02.065 03:28:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:02.065 03:28:39 -- common/autotest_common.sh@10 -- # set +x 00:15:03.966 03:28:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:03.966 03:28:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:03.966 03:28:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:03.966 03:28:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:03.966 03:28:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:03.966 03:28:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:03.966 03:28:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:03.966 03:28:41 -- nvmf/common.sh@295 -- # net_devs=() 00:15:03.966 03:28:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:03.966 03:28:41 -- nvmf/common.sh@296 -- # e810=() 00:15:03.966 03:28:41 -- nvmf/common.sh@296 -- # local -ga e810 00:15:03.966 03:28:41 -- nvmf/common.sh@297 -- # x722=() 00:15:03.966 03:28:41 -- nvmf/common.sh@297 -- # local -ga x722 00:15:03.966 03:28:41 -- nvmf/common.sh@298 -- # mlx=() 00:15:03.966 03:28:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:03.966 03:28:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:03.966 03:28:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:03.966 03:28:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:03.966 03:28:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:03.966 03:28:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:03.966 03:28:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:03.966 03:28:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:03.966 03:28:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:03.966 03:28:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:03.966 03:28:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:03.966 03:28:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:03.966 03:28:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:03.966 03:28:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:03.966 03:28:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:03.966 03:28:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.966 03:28:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:03.966 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:03.966 03:28:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.966 03:28:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:03.966 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:03.966 03:28:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:03.966 03:28:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.966 03:28:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.966 03:28:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:03.966 03:28:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.966 03:28:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:03.966 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:03.966 03:28:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.966 03:28:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.966 03:28:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.966 03:28:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:03.966 03:28:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.966 03:28:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:03.966 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:03.966 03:28:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.966 03:28:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:03.966 03:28:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:03.966 03:28:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:03.966 03:28:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.966 03:28:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.966 03:28:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:03.966 03:28:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:03.966 03:28:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:03.966 03:28:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:03.966 03:28:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:03.966 03:28:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:03.966 03:28:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.966 03:28:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:03.966 03:28:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:03.966 03:28:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:03.966 03:28:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:03.966 03:28:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:03.966 03:28:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:15:03.966 03:28:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:03.966 03:28:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:03.966 03:28:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:03.966 03:28:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:03.966 03:28:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:03.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:15:03.966 00:15:03.966 --- 10.0.0.2 ping statistics --- 00:15:03.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.966 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:15:03.966 03:28:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:03.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:03.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:15:03.966 00:15:03.966 --- 10.0.0.1 ping statistics --- 00:15:03.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.966 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:15:03.966 03:28:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.966 03:28:41 -- nvmf/common.sh@411 -- # return 0 00:15:03.966 03:28:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:03.966 03:28:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.966 03:28:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:03.966 03:28:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.966 03:28:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:03.966 03:28:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:03.966 03:28:41 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:03.966 03:28:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:03.966 03:28:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:03.966 03:28:41 -- common/autotest_common.sh@10 -- # set +x 00:15:03.966 03:28:41 -- nvmf/common.sh@470 -- # nvmfpid=259809 00:15:03.966 03:28:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:03.966 03:28:41 -- nvmf/common.sh@471 -- # waitforlisten 259809 00:15:03.966 03:28:41 -- common/autotest_common.sh@817 -- # '[' -z 259809 ']' 00:15:03.966 03:28:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.966 03:28:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:03.966 03:28:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.966 03:28:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:03.966 03:28:41 -- common/autotest_common.sh@10 -- # set +x 00:15:03.966 [2024-04-19 03:28:41.494416] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:15:03.966 [2024-04-19 03:28:41.494494] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.259 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.259 [2024-04-19 03:28:41.563976] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.259 [2024-04-19 03:28:41.680869] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.259 [2024-04-19 03:28:41.680937] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.259 [2024-04-19 03:28:41.680954] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.259 [2024-04-19 03:28:41.680967] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.259 [2024-04-19 03:28:41.680979] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.259 [2024-04-19 03:28:41.681019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.194 03:28:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:05.194 03:28:42 -- common/autotest_common.sh@850 -- # return 0 00:15:05.194 03:28:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:05.194 03:28:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:05.194 03:28:42 -- common/autotest_common.sh@10 -- # set +x 00:15:05.194 03:28:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.194 03:28:42 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:05.194 03:28:42 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:05.194 03:28:42 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:05.194 03:28:42 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:05.194 03:28:42 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:05.194 03:28:42 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:05.194 03:28:42 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:05.194 03:28:42 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.194 [2024-04-19 03:28:42.725769] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.194 [2024-04-19 03:28:42.741766] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:05.194 [2024-04-19 03:28:42.741967] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.453 [2024-04-19 03:28:42.773221] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:05.453 malloc0 00:15:05.453 03:28:42 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:05.453 03:28:42 -- fips/fips.sh@147 -- # bdevperf_pid=259965 00:15:05.453 03:28:42 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:05.453 03:28:42 -- 
fips/fips.sh@148 -- # waitforlisten 259965 /var/tmp/bdevperf.sock 00:15:05.453 03:28:42 -- common/autotest_common.sh@817 -- # '[' -z 259965 ']' 00:15:05.453 03:28:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:05.453 03:28:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:05.453 03:28:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:05.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:05.453 03:28:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:05.453 03:28:42 -- common/autotest_common.sh@10 -- # set +x 00:15:05.453 [2024-04-19 03:28:42.855696] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:15:05.453 [2024-04-19 03:28:42.855769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259965 ] 00:15:05.453 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.453 [2024-04-19 03:28:42.913085] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.711 [2024-04-19 03:28:43.022388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.711 03:28:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:05.711 03:28:43 -- common/autotest_common.sh@850 -- # return 0 00:15:05.711 03:28:43 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:05.969 [2024-04-19 03:28:43.338825] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:05.969 [2024-04-19 03:28:43.338964] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:05.969 TLSTESTn1 00:15:05.969 03:28:43 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:05.969 Running I/O for 10 seconds... 
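For anyone replaying the TLS path exercised above by hand, the run reduces to the shell sketch below. It only restates commands already visible in this log; SPDK is a convenience variable standing in for the long workspace checkout path, and the PSK value is the fips.sh test key, not a production secret.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path used by this run
  KEY=$SPDK/test/nvmf/fips/key.txt
  # Write the TLS PSK interchange key and lock down its permissions (fips.sh@136-139)
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
  chmod 0600 "$KEY"
  # Start bdevperf on core 2 with its own RPC socket, then attach to the target over TLS
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  # Drive the 10-second verify workload whose results follow
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests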
00:15:18.175 00:15:18.175 Latency(us) 00:15:18.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.175 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:18.175 Verification LBA range: start 0x0 length 0x2000 00:15:18.175 TLSTESTn1 : 10.05 2606.02 10.18 0.00 0.00 48987.23 6553.60 74565.40 00:15:18.175 =================================================================================================================== 00:15:18.175 Total : 2606.02 10.18 0.00 0.00 48987.23 6553.60 74565.40 00:15:18.175 0 00:15:18.175 03:28:53 -- fips/fips.sh@1 -- # cleanup 00:15:18.175 03:28:53 -- fips/fips.sh@15 -- # process_shm --id 0 00:15:18.175 03:28:53 -- common/autotest_common.sh@794 -- # type=--id 00:15:18.175 03:28:53 -- common/autotest_common.sh@795 -- # id=0 00:15:18.175 03:28:53 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:18.175 03:28:53 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:18.175 03:28:53 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:18.175 03:28:53 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:18.175 03:28:53 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:18.175 03:28:53 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:18.175 nvmf_trace.0 00:15:18.175 03:28:53 -- common/autotest_common.sh@809 -- # return 0 00:15:18.175 03:28:53 -- fips/fips.sh@16 -- # killprocess 259965 00:15:18.175 03:28:53 -- common/autotest_common.sh@936 -- # '[' -z 259965 ']' 00:15:18.175 03:28:53 -- common/autotest_common.sh@940 -- # kill -0 259965 00:15:18.175 03:28:53 -- common/autotest_common.sh@941 -- # uname 00:15:18.175 03:28:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:18.175 03:28:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 259965 00:15:18.175 03:28:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:18.175 03:28:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:18.175 03:28:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 259965' 00:15:18.175 killing process with pid 259965 00:15:18.175 03:28:53 -- common/autotest_common.sh@955 -- # kill 259965 00:15:18.175 Received shutdown signal, test time was about 10.000000 seconds 00:15:18.175 00:15:18.175 Latency(us) 00:15:18.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.175 =================================================================================================================== 00:15:18.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:18.175 [2024-04-19 03:28:53.696163] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:18.175 03:28:53 -- common/autotest_common.sh@960 -- # wait 259965 00:15:18.175 03:28:53 -- fips/fips.sh@17 -- # nvmftestfini 00:15:18.175 03:28:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:18.175 03:28:53 -- nvmf/common.sh@117 -- # sync 00:15:18.175 03:28:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:18.175 03:28:53 -- nvmf/common.sh@120 -- # set +e 00:15:18.175 03:28:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:18.175 03:28:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:18.175 rmmod nvme_tcp 00:15:18.175 rmmod nvme_fabrics 00:15:18.175 rmmod nvme_keyring 00:15:18.175 
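The teardown interleaved with the output above (the fips.sh cleanup trap plus nvmftestfini) is, in effect, the sketch below. The netns deletion line is an assumption about what the _remove_spdk_ns helper expands to, since the log elides its body; the pids are the ones from this run.

  sync
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring     # unload host-side NVMe modules
  kill 259965                                           # bdevperf (reactor_2)
  kill 259809                                           # nvmf_tgt (reactor_1)
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1
  rm -f "$SPDK/test/nvmf/fips/key.txt"                  # never leave the PSK file behind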
03:28:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:18.175 03:28:53 -- nvmf/common.sh@124 -- # set -e 00:15:18.175 03:28:53 -- nvmf/common.sh@125 -- # return 0 00:15:18.175 03:28:53 -- nvmf/common.sh@478 -- # '[' -n 259809 ']' 00:15:18.175 03:28:53 -- nvmf/common.sh@479 -- # killprocess 259809 00:15:18.175 03:28:53 -- common/autotest_common.sh@936 -- # '[' -z 259809 ']' 00:15:18.175 03:28:53 -- common/autotest_common.sh@940 -- # kill -0 259809 00:15:18.175 03:28:53 -- common/autotest_common.sh@941 -- # uname 00:15:18.175 03:28:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:18.175 03:28:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 259809 00:15:18.175 03:28:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:18.175 03:28:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:18.175 03:28:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 259809' 00:15:18.175 killing process with pid 259809 00:15:18.175 03:28:54 -- common/autotest_common.sh@955 -- # kill 259809 00:15:18.175 [2024-04-19 03:28:54.013052] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:18.175 03:28:54 -- common/autotest_common.sh@960 -- # wait 259809 00:15:18.175 03:28:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:18.175 03:28:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:18.175 03:28:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:18.175 03:28:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.175 03:28:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:18.175 03:28:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.175 03:28:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.175 03:28:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.130 03:28:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:19.130 03:28:56 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:19.130 00:15:19.130 real 0m17.167s 00:15:19.130 user 0m21.359s 00:15:19.130 sys 0m6.257s 00:15:19.130 03:28:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:19.130 03:28:56 -- common/autotest_common.sh@10 -- # set +x 00:15:19.130 ************************************ 00:15:19.130 END TEST nvmf_fips 00:15:19.130 ************************************ 00:15:19.130 03:28:56 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:15:19.130 03:28:56 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:15:19.130 03:28:56 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:15:19.130 03:28:56 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:15:19.130 03:28:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:19.130 03:28:56 -- common/autotest_common.sh@10 -- # set +x 00:15:21.030 03:28:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:21.030 03:28:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:21.030 03:28:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:21.030 03:28:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:21.030 03:28:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:21.030 03:28:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:21.030 03:28:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:21.030 03:28:58 -- nvmf/common.sh@295 -- # net_devs=() 00:15:21.030 03:28:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:21.030 03:28:58 
-- nvmf/common.sh@296 -- # e810=() 00:15:21.030 03:28:58 -- nvmf/common.sh@296 -- # local -ga e810 00:15:21.030 03:28:58 -- nvmf/common.sh@297 -- # x722=() 00:15:21.030 03:28:58 -- nvmf/common.sh@297 -- # local -ga x722 00:15:21.030 03:28:58 -- nvmf/common.sh@298 -- # mlx=() 00:15:21.030 03:28:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:21.030 03:28:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:21.030 03:28:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:21.030 03:28:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:21.030 03:28:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:21.030 03:28:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:21.030 03:28:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:21.030 03:28:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:21.030 03:28:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:21.030 03:28:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:21.030 03:28:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:21.030 03:28:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:21.030 03:28:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:21.030 03:28:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:21.030 03:28:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:21.030 03:28:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.030 03:28:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:21.030 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:21.030 03:28:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.030 03:28:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:21.030 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:21.030 03:28:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:21.030 03:28:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:21.030 03:28:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.030 03:28:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.030 03:28:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:21.030 03:28:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.030 03:28:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:15:21.030 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:21.030 03:28:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.031 03:28:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.031 03:28:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.031 03:28:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:21.031 03:28:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.031 03:28:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:21.031 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:21.031 03:28:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.031 03:28:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:21.031 03:28:58 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:21.031 03:28:58 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:15:21.031 03:28:58 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:15:21.031 03:28:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:21.031 03:28:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:21.031 03:28:58 -- common/autotest_common.sh@10 -- # set +x 00:15:21.031 ************************************ 00:15:21.031 START TEST nvmf_perf_adq 00:15:21.031 ************************************ 00:15:21.031 03:28:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:15:21.031 * Looking for test storage... 00:15:21.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.031 03:28:58 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.031 03:28:58 -- nvmf/common.sh@7 -- # uname -s 00:15:21.031 03:28:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.031 03:28:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.031 03:28:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.031 03:28:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.031 03:28:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.031 03:28:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.031 03:28:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.031 03:28:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.031 03:28:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.031 03:28:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.031 03:28:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.031 03:28:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.031 03:28:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.031 03:28:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.031 03:28:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.031 03:28:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.031 03:28:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.031 03:28:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.031 03:28:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.031 03:28:58 -- scripts/common.sh@511 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.031 03:28:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.031 03:28:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.031 03:28:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.031 03:28:58 -- paths/export.sh@5 -- # export PATH 00:15:21.031 03:28:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.031 03:28:58 -- nvmf/common.sh@47 -- # : 0 00:15:21.031 03:28:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.031 03:28:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.031 03:28:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.031 03:28:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.031 03:28:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.031 03:28:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.031 03:28:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.031 03:28:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.031 03:28:58 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:15:21.031 03:28:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.031 03:28:58 -- common/autotest_common.sh@10 -- # set +x 00:15:22.931 03:29:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:22.931 03:29:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.931 03:29:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.931 03:29:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.931 03:29:00 -- nvmf/common.sh@292 
-- # local -a pci_net_devs 00:15:22.931 03:29:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.931 03:29:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.931 03:29:00 -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.931 03:29:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.931 03:29:00 -- nvmf/common.sh@296 -- # e810=() 00:15:22.931 03:29:00 -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.931 03:29:00 -- nvmf/common.sh@297 -- # x722=() 00:15:22.931 03:29:00 -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.931 03:29:00 -- nvmf/common.sh@298 -- # mlx=() 00:15:22.931 03:29:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.931 03:29:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.931 03:29:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.931 03:29:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.931 03:29:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.931 03:29:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.931 03:29:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.931 03:29:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.931 03:29:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.931 03:29:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.931 03:29:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.931 03:29:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.931 03:29:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.931 03:29:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.931 03:29:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.931 03:29:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.931 03:29:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:22.931 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:22.931 03:29:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.931 03:29:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:22.931 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:22.931 03:29:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.931 03:29:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.931 03:29:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.931 03:29:00 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.931 03:29:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:22.931 03:29:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.931 03:29:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:22.931 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:22.931 03:29:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.931 03:29:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.931 03:29:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.931 03:29:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:22.931 03:29:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.931 03:29:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:22.931 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:22.931 03:29:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.931 03:29:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:22.931 03:29:00 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.931 03:29:00 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:15:22.931 03:29:00 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:22.931 03:29:00 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:15:22.931 03:29:00 -- target/perf_adq.sh@52 -- # rmmod ice 00:15:23.495 03:29:01 -- target/perf_adq.sh@53 -- # modprobe ice 00:15:25.394 03:29:02 -- target/perf_adq.sh@54 -- # sleep 5 00:15:30.665 03:29:07 -- target/perf_adq.sh@67 -- # nvmftestinit 00:15:30.665 03:29:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:30.665 03:29:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.665 03:29:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:30.665 03:29:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:30.665 03:29:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:30.665 03:29:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.665 03:29:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.665 03:29:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.665 03:29:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:30.665 03:29:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:30.665 03:29:07 -- common/autotest_common.sh@10 -- # set +x 00:15:30.665 03:29:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:30.665 03:29:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:30.665 03:29:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:30.665 03:29:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:30.665 03:29:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:30.665 03:29:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:30.665 03:29:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:30.665 03:29:07 -- nvmf/common.sh@295 -- # net_devs=() 00:15:30.665 03:29:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:30.665 03:29:07 -- nvmf/common.sh@296 -- # e810=() 00:15:30.665 03:29:07 -- nvmf/common.sh@296 -- # local -ga e810 00:15:30.665 03:29:07 -- nvmf/common.sh@297 -- # x722=() 00:15:30.665 03:29:07 -- nvmf/common.sh@297 -- # local -ga x722 00:15:30.665 03:29:07 -- nvmf/common.sh@298 -- # mlx=() 00:15:30.665 03:29:07 -- nvmf/common.sh@298 -- # 
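Between the FIPS pass and the ADQ pass, perf_adq.sh recycles the ice driver so the E810 ports come back without stale channel state; the three xtrace entries above amount to:

  # adq_reload_driver (target/perf_adq.sh@52-54)
  rmmod ice        # drop the in-kernel driver and its queue configuration
  modprobe ice     # reload it fresh
  sleep 5          # give the cvl_* netdevs time to reappear before nvmftestinit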
local -ga mlx 00:15:30.665 03:29:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:30.665 03:29:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:30.665 03:29:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:30.665 03:29:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:30.665 03:29:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:30.665 03:29:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:30.665 03:29:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:30.665 03:29:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:30.665 03:29:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:30.665 03:29:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:30.665 03:29:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:30.665 03:29:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:30.665 03:29:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:30.665 03:29:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:30.665 03:29:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.665 03:29:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:30.665 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:30.665 03:29:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.665 03:29:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:30.665 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:30.665 03:29:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:30.665 03:29:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.665 03:29:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.665 03:29:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:30.665 03:29:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.665 03:29:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:30.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:30.665 03:29:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.665 03:29:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.665 03:29:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:15:30.665 03:29:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:30.665 03:29:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.665 03:29:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:30.665 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:30.665 03:29:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.665 03:29:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:30.665 03:29:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:30.665 03:29:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:30.665 03:29:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:30.665 03:29:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.665 03:29:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.665 03:29:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:30.665 03:29:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:30.665 03:29:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:30.665 03:29:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:30.665 03:29:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:30.665 03:29:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:30.665 03:29:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.665 03:29:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:30.665 03:29:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:30.665 03:29:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:30.665 03:29:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:30.665 03:29:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:30.665 03:29:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:30.666 03:29:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:30.666 03:29:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:30.666 03:29:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.666 03:29:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.666 03:29:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:30.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:15:30.666 00:15:30.666 --- 10.0.0.2 ping statistics --- 00:15:30.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.666 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:15:30.666 03:29:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:15:30.666 00:15:30.666 --- 10.0.0.1 ping statistics --- 00:15:30.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.666 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:15:30.666 03:29:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.666 03:29:07 -- nvmf/common.sh@411 -- # return 0 00:15:30.666 03:29:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:30.666 03:29:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.666 03:29:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:30.666 03:29:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:30.666 03:29:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.666 03:29:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:30.666 03:29:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:30.666 03:29:07 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:30.666 03:29:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:30.666 03:29:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:30.666 03:29:07 -- common/autotest_common.sh@10 -- # set +x 00:15:30.666 03:29:07 -- nvmf/common.sh@470 -- # nvmfpid=265720 00:15:30.666 03:29:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:30.666 03:29:07 -- nvmf/common.sh@471 -- # waitforlisten 265720 00:15:30.666 03:29:07 -- common/autotest_common.sh@817 -- # '[' -z 265720 ']' 00:15:30.666 03:29:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.666 03:29:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:30.666 03:29:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.666 03:29:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:30.666 03:29:07 -- common/autotest_common.sh@10 -- # set +x 00:15:30.666 [2024-04-19 03:29:07.672403] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:15:30.666 [2024-04-19 03:29:07.672496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.666 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.666 [2024-04-19 03:29:07.735157] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.666 [2024-04-19 03:29:07.843208] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.666 [2024-04-19 03:29:07.843257] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.666 [2024-04-19 03:29:07.843270] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.666 [2024-04-19 03:29:07.843281] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.666 [2024-04-19 03:29:07.843290] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
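Once the target is up with --wait-for-rpc, the adq_configure_nvmf_target steps that follow boil down to the RPC sequence sketched here; rpc_cmd is the harness wrapper around $SPDK/scripts/rpc.py, shown plainly for readability.

  # ADQ-friendly socket options must land before framework_start_init,
  # which is why nvmf_tgt was launched with --wait-for-rpc
  rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420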
00:15:30.666 [2024-04-19 03:29:07.843374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.666 [2024-04-19 03:29:07.843444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.666 [2024-04-19 03:29:07.843504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.666 [2024-04-19 03:29:07.843507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.666 03:29:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:30.666 03:29:07 -- common/autotest_common.sh@850 -- # return 0 00:15:30.666 03:29:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:30.666 03:29:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:30.666 03:29:07 -- common/autotest_common.sh@10 -- # set +x 00:15:30.666 03:29:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.666 03:29:07 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:15:30.666 03:29:07 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:15:30.666 03:29:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.666 03:29:07 -- common/autotest_common.sh@10 -- # set +x 00:15:30.666 03:29:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.666 03:29:07 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:15:30.666 03:29:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.666 03:29:07 -- common/autotest_common.sh@10 -- # set +x 00:15:30.666 03:29:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.666 03:29:08 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:15:30.666 03:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.666 03:29:08 -- common/autotest_common.sh@10 -- # set +x 00:15:30.666 [2024-04-19 03:29:08.010944] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.666 03:29:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.666 03:29:08 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:30.666 03:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.666 03:29:08 -- common/autotest_common.sh@10 -- # set +x 00:15:30.666 Malloc1 00:15:30.666 03:29:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.666 03:29:08 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:30.666 03:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.666 03:29:08 -- common/autotest_common.sh@10 -- # set +x 00:15:30.666 03:29:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.666 03:29:08 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:30.666 03:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.666 03:29:08 -- common/autotest_common.sh@10 -- # set +x 00:15:30.666 03:29:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.666 03:29:08 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:30.666 03:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.666 03:29:08 -- common/autotest_common.sh@10 -- # set +x 00:15:30.666 [2024-04-19 03:29:08.061598] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.666 03:29:08 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.666 03:29:08 -- target/perf_adq.sh@73 -- # perfpid=265742 00:15:30.666 03:29:08 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:30.666 03:29:08 -- target/perf_adq.sh@74 -- # sleep 2 00:15:30.666 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.567 03:29:10 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:15:32.567 03:29:10 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:15:32.567 03:29:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:32.567 03:29:10 -- target/perf_adq.sh@76 -- # wc -l 00:15:32.567 03:29:10 -- common/autotest_common.sh@10 -- # set +x 00:15:32.567 03:29:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:32.567 03:29:10 -- target/perf_adq.sh@76 -- # count=4 00:15:32.567 03:29:10 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:15:32.567 03:29:10 -- target/perf_adq.sh@81 -- # wait 265742 00:15:40.737 Initializing NVMe Controllers 00:15:40.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:40.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:15:40.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:15:40.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:15:40.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:15:40.737 Initialization complete. Launching workers. 00:15:40.737 ======================================================== 00:15:40.737 Latency(us) 00:15:40.737 Device Information : IOPS MiB/s Average min max 00:15:40.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10537.30 41.16 6074.28 2027.32 8849.11 00:15:40.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10405.20 40.65 6151.18 2660.03 8213.60 00:15:40.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10553.60 41.22 6065.91 5138.80 7472.96 00:15:40.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8504.20 33.22 7528.32 2227.23 13568.38 00:15:40.737 ======================================================== 00:15:40.737 Total : 40000.29 156.25 6401.21 2027.32 13568.38 00:15:40.737 00:15:40.737 03:29:18 -- target/perf_adq.sh@82 -- # nvmftestfini 00:15:40.737 03:29:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:40.737 03:29:18 -- nvmf/common.sh@117 -- # sync 00:15:40.737 03:29:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:40.737 03:29:18 -- nvmf/common.sh@120 -- # set +e 00:15:40.737 03:29:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:40.737 03:29:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:40.737 rmmod nvme_tcp 00:15:40.737 rmmod nvme_fabrics 00:15:40.737 rmmod nvme_keyring 00:15:40.737 03:29:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:40.737 03:29:18 -- nvmf/common.sh@124 -- # set -e 00:15:40.737 03:29:18 -- nvmf/common.sh@125 -- # return 0 00:15:40.737 03:29:18 -- nvmf/common.sh@478 -- # '[' -n 265720 ']' 00:15:40.738 03:29:18 -- nvmf/common.sh@479 -- # killprocess 265720 00:15:40.738 03:29:18 -- common/autotest_common.sh@936 -- # '[' -z 265720 ']' 00:15:40.738 03:29:18 -- common/autotest_common.sh@940 -- # kill 
-0 265720 00:15:40.738 03:29:18 -- common/autotest_common.sh@941 -- # uname 00:15:40.738 03:29:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:40.738 03:29:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 265720 00:15:40.996 03:29:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:40.996 03:29:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:40.996 03:29:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 265720' 00:15:40.996 killing process with pid 265720 00:15:40.996 03:29:18 -- common/autotest_common.sh@955 -- # kill 265720 00:15:40.996 03:29:18 -- common/autotest_common.sh@960 -- # wait 265720 00:15:41.255 03:29:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:41.255 03:29:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:41.255 03:29:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:41.255 03:29:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.255 03:29:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.255 03:29:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.255 03:29:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.255 03:29:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.159 03:29:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:43.159 03:29:20 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:15:43.159 03:29:20 -- target/perf_adq.sh@52 -- # rmmod ice 00:15:43.725 03:29:21 -- target/perf_adq.sh@53 -- # modprobe ice 00:15:45.627 03:29:22 -- target/perf_adq.sh@54 -- # sleep 5 00:15:50.901 03:29:27 -- target/perf_adq.sh@87 -- # nvmftestinit 00:15:50.901 03:29:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:50.901 03:29:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.901 03:29:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:50.901 03:29:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:50.901 03:29:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:50.901 03:29:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.901 03:29:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.901 03:29:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.901 03:29:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:50.901 03:29:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:50.901 03:29:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:50.901 03:29:27 -- common/autotest_common.sh@10 -- # set +x 00:15:50.901 03:29:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:50.901 03:29:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:50.901 03:29:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:50.901 03:29:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:50.901 03:29:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:50.902 03:29:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:50.902 03:29:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:50.902 03:29:27 -- nvmf/common.sh@295 -- # net_devs=() 00:15:50.902 03:29:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:50.902 03:29:27 -- nvmf/common.sh@296 -- # e810=() 00:15:50.902 03:29:27 -- nvmf/common.sh@296 -- # local -ga e810 00:15:50.902 03:29:27 -- nvmf/common.sh@297 -- # x722=() 00:15:50.902 03:29:27 -- nvmf/common.sh@297 -- # local -ga x722 00:15:50.902 03:29:27 -- nvmf/common.sh@298 -- # mlx=() 00:15:50.902 03:29:27 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:15:50.902 03:29:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.902 03:29:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.902 03:29:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.902 03:29:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.902 03:29:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.902 03:29:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.902 03:29:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.902 03:29:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.902 03:29:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.902 03:29:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.902 03:29:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.902 03:29:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:50.902 03:29:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:50.902 03:29:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:50.902 03:29:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.902 03:29:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:50.902 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:50.902 03:29:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.902 03:29:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:50.902 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:50.902 03:29:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:50.902 03:29:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.902 03:29:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.902 03:29:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:50.902 03:29:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.902 03:29:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:50.902 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:50.902 03:29:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.902 03:29:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.902 03:29:27 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.902 03:29:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:50.902 03:29:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.902 03:29:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:50.902 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:50.902 03:29:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.902 03:29:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:50.902 03:29:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:50.902 03:29:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:50.902 03:29:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.902 03:29:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.902 03:29:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.902 03:29:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:50.902 03:29:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.902 03:29:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.902 03:29:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:50.902 03:29:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.902 03:29:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.902 03:29:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:50.902 03:29:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:50.902 03:29:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.902 03:29:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.902 03:29:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.902 03:29:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:50.902 03:29:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:50.902 03:29:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.902 03:29:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.902 03:29:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.902 03:29:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:50.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:15:50.902 00:15:50.902 --- 10.0.0.2 ping statistics --- 00:15:50.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.902 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:15:50.902 03:29:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:50.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:15:50.902 00:15:50.902 --- 10.0.0.1 ping statistics --- 00:15:50.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.902 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:15:50.902 03:29:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.902 03:29:27 -- nvmf/common.sh@411 -- # return 0 00:15:50.902 03:29:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:50.902 03:29:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.902 03:29:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:50.902 03:29:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.902 03:29:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:50.902 03:29:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:50.902 03:29:27 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:15:50.902 03:29:27 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:15:50.902 03:29:27 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:15:50.902 03:29:27 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:15:50.902 net.core.busy_poll = 1 00:15:50.902 03:29:27 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:15:50.902 net.core.busy_read = 1 00:15:50.902 03:29:27 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:15:50.902 03:29:27 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:15:50.902 03:29:28 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:15:50.902 03:29:28 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:15:50.902 03:29:28 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:15:50.902 03:29:28 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:50.902 03:29:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:50.902 03:29:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:50.902 03:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:50.902 03:29:28 -- nvmf/common.sh@470 -- # nvmfpid=268356 00:15:50.902 03:29:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:50.902 03:29:28 -- nvmf/common.sh@471 -- # waitforlisten 268356 00:15:50.902 03:29:28 -- common/autotest_common.sh@817 -- # '[' -z 268356 ']' 00:15:50.902 03:29:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.902 03:29:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:50.902 03:29:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
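The adq_configure_driver trace above reduces to the following sequence (a condensed sketch; the interface name, the 10.0.0.2/4420 match and the 2@0 2@2 queue split are the values this run uses, and every command is executed inside the cvl_0_0_ns_spdk namespace as shown):

    IFACE=cvl_0_0
    # Let the ice driver offload traffic classes onto hardware queue sets.
    ethtool --offload "$IFACE" hw-tc-offload on
    ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
    # Busy-poll sockets instead of sleeping on interrupts.
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 gets queues 0-1, TC1 gets queues 2-3.
    tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$IFACE" ingress
    # Steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1 in hardware (skip_sw).
    tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1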
00:15:50.902 03:29:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:50.902 03:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:50.902 [2024-04-19 03:29:28.093037] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:15:50.902 [2024-04-19 03:29:28.093116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.902 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.902 [2024-04-19 03:29:28.159179] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:50.902 [2024-04-19 03:29:28.270765] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.902 [2024-04-19 03:29:28.270820] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.902 [2024-04-19 03:29:28.270834] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.902 [2024-04-19 03:29:28.270845] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.902 [2024-04-19 03:29:28.270855] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.902 [2024-04-19 03:29:28.270907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.902 [2024-04-19 03:29:28.270963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.902 [2024-04-19 03:29:28.271029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:50.903 [2024-04-19 03:29:28.271031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.903 03:29:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:50.903 03:29:28 -- common/autotest_common.sh@850 -- # return 0 00:15:50.903 03:29:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:50.903 03:29:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:50.903 03:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:50.903 03:29:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.903 03:29:28 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:15:50.903 03:29:28 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:15:50.903 03:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.903 03:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:50.903 03:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.903 03:29:28 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:15:50.903 03:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.903 03:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:50.903 03:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.903 03:29:28 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:15:50.903 03:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.903 03:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:50.903 [2024-04-19 03:29:28.447315] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.903 03:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.903 03:29:28 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
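adq_configure_nvmf_target, just traced, is three RPCs issued before any subsystem exists (sketched here with scripts/rpc.py, which rpc_cmd wraps in these tests):

    # Tag accepted sockets with a placement ID and enable zero-copy sends.
    scripts/rpc.py sock_impl_set_options -i posix \
        --enable-placement-id 1 --enable-zerocopy-send-server
    # The app was started with --wait-for-rpc, so framework init finishes
    # only now, after the socket options are in place.
    scripts/rpc.py framework_start_init
    # --sock-priority 1 pairs the listener with the hw_tc 1 filter above.
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1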
00:15:50.903 03:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.903 03:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:51.161 Malloc1 00:15:51.161 03:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.161 03:29:28 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:51.161 03:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.161 03:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:51.161 03:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.162 03:29:28 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:51.162 03:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.162 03:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:51.162 03:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.162 03:29:28 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.162 03:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.162 03:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:51.162 [2024-04-19 03:29:28.500733] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.162 03:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.162 03:29:28 -- target/perf_adq.sh@94 -- # perfpid=268391 00:15:51.162 03:29:28 -- target/perf_adq.sh@95 -- # sleep 2 00:15:51.162 03:29:28 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:51.162 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.068 03:29:30 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:15:53.068 03:29:30 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:15:53.068 03:29:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.068 03:29:30 -- target/perf_adq.sh@97 -- # wc -l 00:15:53.068 03:29:30 -- common/autotest_common.sh@10 -- # set +x 00:15:53.068 03:29:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.068 03:29:30 -- target/perf_adq.sh@97 -- # count=2 00:15:53.068 03:29:30 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:15:53.068 03:29:30 -- target/perf_adq.sh@103 -- # wait 268391 00:16:01.178 Initializing NVMe Controllers 00:16:01.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:01.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:01.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:01.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:01.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:01.179 Initialization complete. Launching workers. 
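The count=2 check a few entries up is the actual ADQ assertion: the test pulls per-poll-group statistics from the target and counts groups by their current qpair load. The pipeline, reconstructed from the trace:

    # Poll groups that currently own no I/O qpairs; with ADQ steering the
    # four connections should concentrate on the TC1 queues instead of
    # spreading one per reactor as in the earlier non-ADQ run.
    count=$(rpc_cmd nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
            | wc -l)
    # perf_adq.sh@98: this run passes because 2 is not less than 2.
    [[ $count -lt 2 ]] && exit 1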
00:16:01.179 ======================================================== 00:16:01.179 Latency(us) 00:16:01.179 Device Information : IOPS MiB/s Average min max 00:16:01.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4767.50 18.62 13467.82 1817.92 62314.43 00:16:01.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4657.40 18.19 13758.89 2071.88 62015.52 00:16:01.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12989.70 50.74 4926.69 1465.41 8182.62 00:16:01.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4611.60 18.01 13931.53 1997.45 62078.78 00:16:01.179 ======================================================== 00:16:01.179 Total : 27026.19 105.57 9491.95 1465.41 62314.43 00:16:01.179 00:16:01.179 03:29:38 -- target/perf_adq.sh@104 -- # nvmftestfini 00:16:01.179 03:29:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:01.179 03:29:38 -- nvmf/common.sh@117 -- # sync 00:16:01.179 03:29:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:01.179 03:29:38 -- nvmf/common.sh@120 -- # set +e 00:16:01.179 03:29:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.179 03:29:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:01.179 rmmod nvme_tcp 00:16:01.179 rmmod nvme_fabrics 00:16:01.179 rmmod nvme_keyring 00:16:01.436 03:29:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:01.436 03:29:38 -- nvmf/common.sh@124 -- # set -e 00:16:01.436 03:29:38 -- nvmf/common.sh@125 -- # return 0 00:16:01.436 03:29:38 -- nvmf/common.sh@478 -- # '[' -n 268356 ']' 00:16:01.436 03:29:38 -- nvmf/common.sh@479 -- # killprocess 268356 00:16:01.436 03:29:38 -- common/autotest_common.sh@936 -- # '[' -z 268356 ']' 00:16:01.436 03:29:38 -- common/autotest_common.sh@940 -- # kill -0 268356 00:16:01.436 03:29:38 -- common/autotest_common.sh@941 -- # uname 00:16:01.436 03:29:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:01.437 03:29:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 268356 00:16:01.437 03:29:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:01.437 03:29:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:01.437 03:29:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 268356' 00:16:01.437 killing process with pid 268356 00:16:01.437 03:29:38 -- common/autotest_common.sh@955 -- # kill 268356 00:16:01.437 03:29:38 -- common/autotest_common.sh@960 -- # wait 268356 00:16:01.696 03:29:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:01.696 03:29:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:01.696 03:29:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:01.696 03:29:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.696 03:29:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:01.696 03:29:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.696 03:29:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.696 03:29:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.600 03:29:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:03.600 03:29:41 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:16:03.600 00:16:03.600 real 0m42.731s 00:16:03.600 user 2m34.030s 00:16:03.600 sys 0m11.340s 00:16:03.600 03:29:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:03.600 03:29:41 -- common/autotest_common.sh@10 -- # set +x 00:16:03.600 
************************************ 00:16:03.600 END TEST nvmf_perf_adq 00:16:03.600 ************************************ 00:16:03.600 03:29:41 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:03.600 03:29:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:03.600 03:29:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:03.600 03:29:41 -- common/autotest_common.sh@10 -- # set +x 00:16:03.859 ************************************ 00:16:03.859 START TEST nvmf_shutdown 00:16:03.859 ************************************ 00:16:03.859 03:29:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:03.859 * Looking for test storage... 00:16:03.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:03.859 03:29:41 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.859 03:29:41 -- nvmf/common.sh@7 -- # uname -s 00:16:03.859 03:29:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.859 03:29:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.859 03:29:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.859 03:29:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.859 03:29:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.859 03:29:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.859 03:29:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.859 03:29:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.859 03:29:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.859 03:29:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.859 03:29:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.859 03:29:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.859 03:29:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.859 03:29:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.859 03:29:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.859 03:29:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.859 03:29:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:03.859 03:29:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.859 03:29:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.859 03:29:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.859 03:29:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.859 03:29:41 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.859 03:29:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.859 03:29:41 -- paths/export.sh@5 -- # export PATH 00:16:03.859 03:29:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.859 03:29:41 -- nvmf/common.sh@47 -- # : 0 00:16:03.859 03:29:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:03.859 03:29:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:03.859 03:29:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.859 03:29:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.859 03:29:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.859 03:29:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:03.859 03:29:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:03.859 03:29:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:03.859 03:29:41 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:03.859 03:29:41 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:03.859 03:29:41 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:03.859 03:29:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:03.859 03:29:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:03.859 03:29:41 -- common/autotest_common.sh@10 -- # set +x 00:16:03.859 ************************************ 00:16:03.859 START TEST nvmf_shutdown_tc1 00:16:03.859 ************************************ 00:16:03.859 03:29:41 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:16:03.859 03:29:41 -- target/shutdown.sh@74 -- # starttarget 00:16:03.859 03:29:41 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:03.859 03:29:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:03.859 03:29:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.859 03:29:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:03.859 03:29:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:03.859 03:29:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:03.859 
03:29:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.859 03:29:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.859 03:29:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.859 03:29:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:03.859 03:29:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:03.859 03:29:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:03.859 03:29:41 -- common/autotest_common.sh@10 -- # set +x 00:16:06.392 03:29:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:06.392 03:29:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.392 03:29:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.392 03:29:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.392 03:29:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.392 03:29:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.392 03:29:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.392 03:29:43 -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.392 03:29:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:06.392 03:29:43 -- nvmf/common.sh@296 -- # e810=() 00:16:06.392 03:29:43 -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.392 03:29:43 -- nvmf/common.sh@297 -- # x722=() 00:16:06.392 03:29:43 -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.392 03:29:43 -- nvmf/common.sh@298 -- # mlx=() 00:16:06.392 03:29:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:06.392 03:29:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.392 03:29:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.392 03:29:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.392 03:29:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.392 03:29:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.392 03:29:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.392 03:29:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.392 03:29:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.392 03:29:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.392 03:29:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.392 03:29:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.392 03:29:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.392 03:29:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.392 03:29:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.392 03:29:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.392 03:29:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:06.392 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:06.392 03:29:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:16:06.392 03:29:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:06.392 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:06.392 03:29:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.392 03:29:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.392 03:29:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.392 03:29:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.392 03:29:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:06.392 03:29:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.392 03:29:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:06.392 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:06.392 03:29:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.392 03:29:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.392 03:29:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.392 03:29:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:06.393 03:29:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.393 03:29:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:06.393 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:06.393 03:29:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.393 03:29:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:06.393 03:29:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:06.393 03:29:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:06.393 03:29:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:06.393 03:29:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:06.393 03:29:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.393 03:29:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.393 03:29:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.393 03:29:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:06.393 03:29:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.393 03:29:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.393 03:29:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:06.393 03:29:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.393 03:29:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.393 03:29:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:06.393 03:29:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:06.393 03:29:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.393 03:29:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.393 03:29:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.393 03:29:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.393 03:29:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:06.393 03:29:43 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.393 03:29:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.393 03:29:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.393 03:29:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:16:06.393 00:16:06.393 --- 10.0.0.2 ping statistics --- 00:16:06.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.393 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:16:06.393 03:29:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:06.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:16:06.393 00:16:06.393 --- 10.0.0.1 ping statistics --- 00:16:06.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.393 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:16:06.393 03:29:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.393 03:29:43 -- nvmf/common.sh@411 -- # return 0 00:16:06.393 03:29:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:06.393 03:29:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.393 03:29:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:06.393 03:29:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:06.393 03:29:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.393 03:29:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:06.393 03:29:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:06.393 03:29:43 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:06.393 03:29:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:06.393 03:29:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:06.393 03:29:43 -- common/autotest_common.sh@10 -- # set +x 00:16:06.393 03:29:43 -- nvmf/common.sh@470 -- # nvmfpid=271676 00:16:06.393 03:29:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:06.393 03:29:43 -- nvmf/common.sh@471 -- # waitforlisten 271676 00:16:06.393 03:29:43 -- common/autotest_common.sh@817 -- # '[' -z 271676 ']' 00:16:06.393 03:29:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.393 03:29:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:06.393 03:29:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.393 03:29:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:06.393 03:29:43 -- common/autotest_common.sh@10 -- # set +x 00:16:06.393 [2024-04-19 03:29:43.556163] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
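Everything target-side in this test runs inside the namespace that owns NIC port cvl_0_0; the peer port cvl_0_1 stays in the root namespace, which is how the initiator reaches 10.0.0.2 from outside. The nvmfappstart traced above therefore amounts to roughly the following (the backgrounding and pid bookkeeping are the wrapper's usual behavior, sketched here rather than copied):

    # Cores 1-4 (-m 0x1E), all tracepoint groups on (-e 0xFFFF), launched
    # inside the namespace created by nvmf_tcp_init.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # waitforlisten then polls /var/tmp/spdk.sock until the RPC server answers.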
00:16:06.393 [2024-04-19 03:29:43.556243] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.393 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.393 [2024-04-19 03:29:43.624562] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:06.393 [2024-04-19 03:29:43.742342] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.393 [2024-04-19 03:29:43.742426] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.393 [2024-04-19 03:29:43.742454] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.393 [2024-04-19 03:29:43.742467] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.393 [2024-04-19 03:29:43.742479] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:06.393 [2024-04-19 03:29:43.742562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.393 [2024-04-19 03:29:43.742609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.393 [2024-04-19 03:29:43.742692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:06.393 [2024-04-19 03:29:43.742709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.959 03:29:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:06.959 03:29:44 -- common/autotest_common.sh@850 -- # return 0 00:16:06.959 03:29:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:06.959 03:29:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:06.959 03:29:44 -- common/autotest_common.sh@10 -- # set +x 00:16:06.959 03:29:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.959 03:29:44 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:06.959 03:29:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.959 03:29:44 -- common/autotest_common.sh@10 -- # set +x 00:16:06.959 [2024-04-19 03:29:44.498235] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.959 03:29:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.959 03:29:44 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:06.959 03:29:44 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:06.959 03:29:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:06.959 03:29:44 -- common/autotest_common.sh@10 -- # set +x 00:16:06.959 03:29:44 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:06.959 03:29:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:06.959 03:29:44 -- target/shutdown.sh@28 -- # cat 00:16:06.959 03:29:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:06.959 03:29:44 -- target/shutdown.sh@28 -- # cat 00:16:06.959 03:29:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:06.959 03:29:44 -- target/shutdown.sh@28 -- # cat 00:16:07.217 03:29:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:07.217 03:29:44 -- target/shutdown.sh@28 -- # cat 00:16:07.217 03:29:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:07.217 03:29:44 -- target/shutdown.sh@28 
-- # cat 00:16:07.217 03:29:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:07.217 03:29:44 -- target/shutdown.sh@28 -- # cat 00:16:07.217 03:29:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:07.217 03:29:44 -- target/shutdown.sh@28 -- # cat 00:16:07.217 03:29:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:07.217 03:29:44 -- target/shutdown.sh@28 -- # cat 00:16:07.217 03:29:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:07.217 03:29:44 -- target/shutdown.sh@28 -- # cat 00:16:07.217 03:29:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:07.217 03:29:44 -- target/shutdown.sh@28 -- # cat 00:16:07.217 03:29:44 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:07.217 03:29:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.217 03:29:44 -- common/autotest_common.sh@10 -- # set +x 00:16:07.217 Malloc1 00:16:07.217 [2024-04-19 03:29:44.573985] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.217 Malloc2 00:16:07.217 Malloc3 00:16:07.217 Malloc4 00:16:07.217 Malloc5 00:16:07.475 Malloc6 00:16:07.475 Malloc7 00:16:07.475 Malloc8 00:16:07.475 Malloc9 00:16:07.475 Malloc10 00:16:07.475 03:29:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.475 03:29:45 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:07.475 03:29:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:07.475 03:29:45 -- common/autotest_common.sh@10 -- # set +x 00:16:07.734 03:29:45 -- target/shutdown.sh@78 -- # perfpid=271865 00:16:07.734 03:29:45 -- target/shutdown.sh@79 -- # waitforlisten 271865 /var/tmp/bdevperf.sock 00:16:07.734 03:29:45 -- common/autotest_common.sh@817 -- # '[' -z 271865 ']' 00:16:07.734 03:29:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:07.734 03:29:45 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:07.734 03:29:45 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:07.734 03:29:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:07.734 03:29:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:07.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
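The --json /dev/fd/63 handed to bdev_svc above is bash process substitution: gen_nvmf_target_json writes the complete initiator-side configuration into a pipe and the app reads it straight from that fd, so nothing touches disk. The shape of the call:

    # num_subsystems=({1..10}) in this test; one bdev_nvme_attach_controller
    # entry is generated per subsystem number.
    ./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json "${num_subsystems[@]}")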
00:16:07.734 03:29:45 -- nvmf/common.sh@521 -- # config=() 00:16:07.734 03:29:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:07.734 03:29:45 -- nvmf/common.sh@521 -- # local subsystem config 00:16:07.734 03:29:45 -- common/autotest_common.sh@10 -- # set +x 00:16:07.734 03:29:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:07.734 { 00:16:07.734 "params": { 00:16:07.734 "name": "Nvme$subsystem", 00:16:07.734 "trtype": "$TEST_TRANSPORT", 00:16:07.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.734 "adrfam": "ipv4", 00:16:07.734 "trsvcid": "$NVMF_PORT", 00:16:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.734 "hdgst": ${hdgst:-false}, 00:16:07.734 "ddgst": ${ddgst:-false} 00:16:07.734 }, 00:16:07.734 "method": "bdev_nvme_attach_controller" 00:16:07.734 } 00:16:07.734 EOF 00:16:07.734 )") 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # cat 00:16:07.734 03:29:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:07.734 { 00:16:07.734 "params": { 00:16:07.734 "name": "Nvme$subsystem", 00:16:07.734 "trtype": "$TEST_TRANSPORT", 00:16:07.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.734 "adrfam": "ipv4", 00:16:07.734 "trsvcid": "$NVMF_PORT", 00:16:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.734 "hdgst": ${hdgst:-false}, 00:16:07.734 "ddgst": ${ddgst:-false} 00:16:07.734 }, 00:16:07.734 "method": "bdev_nvme_attach_controller" 00:16:07.734 } 00:16:07.734 EOF 00:16:07.734 )") 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # cat 00:16:07.734 03:29:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:07.734 { 00:16:07.734 "params": { 00:16:07.734 "name": "Nvme$subsystem", 00:16:07.734 "trtype": "$TEST_TRANSPORT", 00:16:07.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.734 "adrfam": "ipv4", 00:16:07.734 "trsvcid": "$NVMF_PORT", 00:16:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.734 "hdgst": ${hdgst:-false}, 00:16:07.734 "ddgst": ${ddgst:-false} 00:16:07.734 }, 00:16:07.734 "method": "bdev_nvme_attach_controller" 00:16:07.734 } 00:16:07.734 EOF 00:16:07.734 )") 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # cat 00:16:07.734 03:29:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:07.734 { 00:16:07.734 "params": { 00:16:07.734 "name": "Nvme$subsystem", 00:16:07.734 "trtype": "$TEST_TRANSPORT", 00:16:07.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.734 "adrfam": "ipv4", 00:16:07.734 "trsvcid": "$NVMF_PORT", 00:16:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.734 "hdgst": ${hdgst:-false}, 00:16:07.734 "ddgst": ${ddgst:-false} 00:16:07.734 }, 00:16:07.734 "method": "bdev_nvme_attach_controller" 00:16:07.734 } 00:16:07.734 EOF 00:16:07.734 )") 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # cat 00:16:07.734 03:29:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:07.734 { 00:16:07.734 "params": { 00:16:07.734 "name": "Nvme$subsystem", 00:16:07.734 "trtype": 
"$TEST_TRANSPORT", 00:16:07.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.734 "adrfam": "ipv4", 00:16:07.734 "trsvcid": "$NVMF_PORT", 00:16:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.734 "hdgst": ${hdgst:-false}, 00:16:07.734 "ddgst": ${ddgst:-false} 00:16:07.734 }, 00:16:07.734 "method": "bdev_nvme_attach_controller" 00:16:07.734 } 00:16:07.734 EOF 00:16:07.734 )") 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # cat 00:16:07.734 03:29:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:07.734 { 00:16:07.734 "params": { 00:16:07.734 "name": "Nvme$subsystem", 00:16:07.734 "trtype": "$TEST_TRANSPORT", 00:16:07.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.734 "adrfam": "ipv4", 00:16:07.734 "trsvcid": "$NVMF_PORT", 00:16:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.734 "hdgst": ${hdgst:-false}, 00:16:07.734 "ddgst": ${ddgst:-false} 00:16:07.734 }, 00:16:07.734 "method": "bdev_nvme_attach_controller" 00:16:07.734 } 00:16:07.734 EOF 00:16:07.734 )") 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # cat 00:16:07.734 03:29:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:07.734 { 00:16:07.734 "params": { 00:16:07.734 "name": "Nvme$subsystem", 00:16:07.734 "trtype": "$TEST_TRANSPORT", 00:16:07.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.734 "adrfam": "ipv4", 00:16:07.734 "trsvcid": "$NVMF_PORT", 00:16:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.734 "hdgst": ${hdgst:-false}, 00:16:07.734 "ddgst": ${ddgst:-false} 00:16:07.734 }, 00:16:07.734 "method": "bdev_nvme_attach_controller" 00:16:07.734 } 00:16:07.734 EOF 00:16:07.734 )") 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # cat 00:16:07.734 03:29:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:07.734 { 00:16:07.734 "params": { 00:16:07.734 "name": "Nvme$subsystem", 00:16:07.734 "trtype": "$TEST_TRANSPORT", 00:16:07.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.734 "adrfam": "ipv4", 00:16:07.734 "trsvcid": "$NVMF_PORT", 00:16:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.734 "hdgst": ${hdgst:-false}, 00:16:07.734 "ddgst": ${ddgst:-false} 00:16:07.734 }, 00:16:07.734 "method": "bdev_nvme_attach_controller" 00:16:07.734 } 00:16:07.734 EOF 00:16:07.734 )") 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # cat 00:16:07.734 03:29:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:07.734 { 00:16:07.734 "params": { 00:16:07.734 "name": "Nvme$subsystem", 00:16:07.734 "trtype": "$TEST_TRANSPORT", 00:16:07.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.734 "adrfam": "ipv4", 00:16:07.734 "trsvcid": "$NVMF_PORT", 00:16:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.734 "hdgst": ${hdgst:-false}, 00:16:07.734 "ddgst": ${ddgst:-false} 00:16:07.734 }, 00:16:07.734 "method": "bdev_nvme_attach_controller" 00:16:07.734 } 00:16:07.734 EOF 00:16:07.734 )") 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # cat 00:16:07.734 
03:29:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:07.734 { 00:16:07.734 "params": { 00:16:07.734 "name": "Nvme$subsystem", 00:16:07.734 "trtype": "$TEST_TRANSPORT", 00:16:07.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.734 "adrfam": "ipv4", 00:16:07.734 "trsvcid": "$NVMF_PORT", 00:16:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.734 "hdgst": ${hdgst:-false}, 00:16:07.734 "ddgst": ${ddgst:-false} 00:16:07.734 }, 00:16:07.734 "method": "bdev_nvme_attach_controller" 00:16:07.734 } 00:16:07.734 EOF 00:16:07.734 )") 00:16:07.734 03:29:45 -- nvmf/common.sh@543 -- # cat 00:16:07.735 03:29:45 -- nvmf/common.sh@545 -- # jq . 00:16:07.735 03:29:45 -- nvmf/common.sh@546 -- # IFS=, 00:16:07.735 03:29:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:07.735 "params": { 00:16:07.735 "name": "Nvme1", 00:16:07.735 "trtype": "tcp", 00:16:07.735 "traddr": "10.0.0.2", 00:16:07.735 "adrfam": "ipv4", 00:16:07.735 "trsvcid": "4420", 00:16:07.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:07.735 "hdgst": false, 00:16:07.735 "ddgst": false 00:16:07.735 }, 00:16:07.735 "method": "bdev_nvme_attach_controller" 00:16:07.735 },{ 00:16:07.735 "params": { 00:16:07.735 "name": "Nvme2", 00:16:07.735 "trtype": "tcp", 00:16:07.735 "traddr": "10.0.0.2", 00:16:07.735 "adrfam": "ipv4", 00:16:07.735 "trsvcid": "4420", 00:16:07.735 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:07.735 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:07.735 "hdgst": false, 00:16:07.735 "ddgst": false 00:16:07.735 }, 00:16:07.735 "method": "bdev_nvme_attach_controller" 00:16:07.735 },{ 00:16:07.735 "params": { 00:16:07.735 "name": "Nvme3", 00:16:07.735 "trtype": "tcp", 00:16:07.735 "traddr": "10.0.0.2", 00:16:07.735 "adrfam": "ipv4", 00:16:07.735 "trsvcid": "4420", 00:16:07.735 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:07.735 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:07.735 "hdgst": false, 00:16:07.735 "ddgst": false 00:16:07.735 }, 00:16:07.735 "method": "bdev_nvme_attach_controller" 00:16:07.735 },{ 00:16:07.735 "params": { 00:16:07.735 "name": "Nvme4", 00:16:07.735 "trtype": "tcp", 00:16:07.735 "traddr": "10.0.0.2", 00:16:07.735 "adrfam": "ipv4", 00:16:07.735 "trsvcid": "4420", 00:16:07.735 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:07.735 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:07.735 "hdgst": false, 00:16:07.735 "ddgst": false 00:16:07.735 }, 00:16:07.735 "method": "bdev_nvme_attach_controller" 00:16:07.735 },{ 00:16:07.735 "params": { 00:16:07.735 "name": "Nvme5", 00:16:07.735 "trtype": "tcp", 00:16:07.735 "traddr": "10.0.0.2", 00:16:07.735 "adrfam": "ipv4", 00:16:07.735 "trsvcid": "4420", 00:16:07.735 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:07.735 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:07.735 "hdgst": false, 00:16:07.735 "ddgst": false 00:16:07.735 }, 00:16:07.735 "method": "bdev_nvme_attach_controller" 00:16:07.735 },{ 00:16:07.735 "params": { 00:16:07.735 "name": "Nvme6", 00:16:07.735 "trtype": "tcp", 00:16:07.735 "traddr": "10.0.0.2", 00:16:07.735 "adrfam": "ipv4", 00:16:07.735 "trsvcid": "4420", 00:16:07.735 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:07.735 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:07.735 "hdgst": false, 00:16:07.735 "ddgst": false 00:16:07.735 }, 00:16:07.735 "method": "bdev_nvme_attach_controller" 00:16:07.735 },{ 00:16:07.735 "params": { 00:16:07.735 
"name": "Nvme7", 00:16:07.735 "trtype": "tcp", 00:16:07.735 "traddr": "10.0.0.2", 00:16:07.735 "adrfam": "ipv4", 00:16:07.735 "trsvcid": "4420", 00:16:07.735 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:07.735 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:07.735 "hdgst": false, 00:16:07.735 "ddgst": false 00:16:07.735 }, 00:16:07.735 "method": "bdev_nvme_attach_controller" 00:16:07.735 },{ 00:16:07.735 "params": { 00:16:07.735 "name": "Nvme8", 00:16:07.735 "trtype": "tcp", 00:16:07.735 "traddr": "10.0.0.2", 00:16:07.735 "adrfam": "ipv4", 00:16:07.735 "trsvcid": "4420", 00:16:07.735 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:07.735 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:07.735 "hdgst": false, 00:16:07.735 "ddgst": false 00:16:07.735 }, 00:16:07.735 "method": "bdev_nvme_attach_controller" 00:16:07.735 },{ 00:16:07.735 "params": { 00:16:07.735 "name": "Nvme9", 00:16:07.735 "trtype": "tcp", 00:16:07.735 "traddr": "10.0.0.2", 00:16:07.735 "adrfam": "ipv4", 00:16:07.735 "trsvcid": "4420", 00:16:07.735 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:07.735 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:07.735 "hdgst": false, 00:16:07.735 "ddgst": false 00:16:07.735 }, 00:16:07.735 "method": "bdev_nvme_attach_controller" 00:16:07.735 },{ 00:16:07.735 "params": { 00:16:07.735 "name": "Nvme10", 00:16:07.735 "trtype": "tcp", 00:16:07.735 "traddr": "10.0.0.2", 00:16:07.735 "adrfam": "ipv4", 00:16:07.735 "trsvcid": "4420", 00:16:07.735 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:07.735 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:07.735 "hdgst": false, 00:16:07.735 "ddgst": false 00:16:07.735 }, 00:16:07.735 "method": "bdev_nvme_attach_controller" 00:16:07.735 }' 00:16:07.735 [2024-04-19 03:29:45.085248] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:16:07.735 [2024-04-19 03:29:45.085318] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:07.735 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.735 [2024-04-19 03:29:45.148376] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.735 [2024-04-19 03:29:45.257877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.693 03:29:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:09.693 03:29:46 -- common/autotest_common.sh@850 -- # return 0 00:16:09.693 03:29:46 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:09.693 03:29:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.693 03:29:46 -- common/autotest_common.sh@10 -- # set +x 00:16:09.693 03:29:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.693 03:29:46 -- target/shutdown.sh@83 -- # kill -9 271865 00:16:09.693 03:29:46 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:16:09.693 03:29:46 -- target/shutdown.sh@87 -- # sleep 1 00:16:10.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 271865 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:16:10.626 03:29:47 -- target/shutdown.sh@88 -- # kill -0 271676 00:16:10.626 03:29:47 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:10.626 03:29:47 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:10.626 03:29:47 -- nvmf/common.sh@521 -- # config=() 00:16:10.626 03:29:47 -- nvmf/common.sh@521 -- # local subsystem config 00:16:10.626 03:29:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.626 { 00:16:10.626 "params": { 00:16:10.626 "name": "Nvme$subsystem", 00:16:10.626 "trtype": "$TEST_TRANSPORT", 00:16:10.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.626 "adrfam": "ipv4", 00:16:10.626 "trsvcid": "$NVMF_PORT", 00:16:10.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.626 "hdgst": ${hdgst:-false}, 00:16:10.626 "ddgst": ${ddgst:-false} 00:16:10.626 }, 00:16:10.626 "method": "bdev_nvme_attach_controller" 00:16:10.626 } 00:16:10.626 EOF 00:16:10.626 )") 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # cat 00:16:10.626 03:29:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.626 { 00:16:10.626 "params": { 00:16:10.626 "name": "Nvme$subsystem", 00:16:10.626 "trtype": "$TEST_TRANSPORT", 00:16:10.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.626 "adrfam": "ipv4", 00:16:10.626 "trsvcid": "$NVMF_PORT", 00:16:10.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.626 "hdgst": ${hdgst:-false}, 00:16:10.626 "ddgst": ${ddgst:-false} 00:16:10.626 }, 00:16:10.626 "method": "bdev_nvme_attach_controller" 00:16:10.626 } 00:16:10.626 EOF 00:16:10.626 )") 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # cat 00:16:10.626 03:29:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # 
config+=("$(cat <<-EOF 00:16:10.626 { 00:16:10.626 "params": { 00:16:10.626 "name": "Nvme$subsystem", 00:16:10.626 "trtype": "$TEST_TRANSPORT", 00:16:10.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.626 "adrfam": "ipv4", 00:16:10.626 "trsvcid": "$NVMF_PORT", 00:16:10.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.626 "hdgst": ${hdgst:-false}, 00:16:10.626 "ddgst": ${ddgst:-false} 00:16:10.626 }, 00:16:10.626 "method": "bdev_nvme_attach_controller" 00:16:10.626 } 00:16:10.626 EOF 00:16:10.626 )") 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # cat 00:16:10.626 03:29:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.626 { 00:16:10.626 "params": { 00:16:10.626 "name": "Nvme$subsystem", 00:16:10.626 "trtype": "$TEST_TRANSPORT", 00:16:10.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.626 "adrfam": "ipv4", 00:16:10.626 "trsvcid": "$NVMF_PORT", 00:16:10.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.626 "hdgst": ${hdgst:-false}, 00:16:10.626 "ddgst": ${ddgst:-false} 00:16:10.626 }, 00:16:10.626 "method": "bdev_nvme_attach_controller" 00:16:10.626 } 00:16:10.626 EOF 00:16:10.626 )") 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # cat 00:16:10.626 03:29:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.626 { 00:16:10.626 "params": { 00:16:10.626 "name": "Nvme$subsystem", 00:16:10.626 "trtype": "$TEST_TRANSPORT", 00:16:10.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.626 "adrfam": "ipv4", 00:16:10.626 "trsvcid": "$NVMF_PORT", 00:16:10.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.626 "hdgst": ${hdgst:-false}, 00:16:10.626 "ddgst": ${ddgst:-false} 00:16:10.626 }, 00:16:10.626 "method": "bdev_nvme_attach_controller" 00:16:10.626 } 00:16:10.626 EOF 00:16:10.626 )") 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # cat 00:16:10.626 03:29:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.626 { 00:16:10.626 "params": { 00:16:10.626 "name": "Nvme$subsystem", 00:16:10.626 "trtype": "$TEST_TRANSPORT", 00:16:10.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.626 "adrfam": "ipv4", 00:16:10.626 "trsvcid": "$NVMF_PORT", 00:16:10.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.626 "hdgst": ${hdgst:-false}, 00:16:10.626 "ddgst": ${ddgst:-false} 00:16:10.626 }, 00:16:10.626 "method": "bdev_nvme_attach_controller" 00:16:10.626 } 00:16:10.626 EOF 00:16:10.626 )") 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # cat 00:16:10.626 03:29:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.626 { 00:16:10.626 "params": { 00:16:10.626 "name": "Nvme$subsystem", 00:16:10.626 "trtype": "$TEST_TRANSPORT", 00:16:10.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.626 "adrfam": "ipv4", 00:16:10.626 "trsvcid": "$NVMF_PORT", 00:16:10.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.626 "hdgst": ${hdgst:-false}, 00:16:10.626 "ddgst": ${ddgst:-false} 00:16:10.626 }, 00:16:10.626 "method": 
"bdev_nvme_attach_controller" 00:16:10.626 } 00:16:10.626 EOF 00:16:10.626 )") 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # cat 00:16:10.626 03:29:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.626 { 00:16:10.626 "params": { 00:16:10.626 "name": "Nvme$subsystem", 00:16:10.626 "trtype": "$TEST_TRANSPORT", 00:16:10.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.626 "adrfam": "ipv4", 00:16:10.626 "trsvcid": "$NVMF_PORT", 00:16:10.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.626 "hdgst": ${hdgst:-false}, 00:16:10.626 "ddgst": ${ddgst:-false} 00:16:10.626 }, 00:16:10.626 "method": "bdev_nvme_attach_controller" 00:16:10.626 } 00:16:10.626 EOF 00:16:10.626 )") 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # cat 00:16:10.626 03:29:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.626 { 00:16:10.626 "params": { 00:16:10.626 "name": "Nvme$subsystem", 00:16:10.626 "trtype": "$TEST_TRANSPORT", 00:16:10.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.626 "adrfam": "ipv4", 00:16:10.626 "trsvcid": "$NVMF_PORT", 00:16:10.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.626 "hdgst": ${hdgst:-false}, 00:16:10.626 "ddgst": ${ddgst:-false} 00:16:10.626 }, 00:16:10.626 "method": "bdev_nvme_attach_controller" 00:16:10.626 } 00:16:10.626 EOF 00:16:10.626 )") 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # cat 00:16:10.626 03:29:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.626 03:29:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.626 { 00:16:10.626 "params": { 00:16:10.627 "name": "Nvme$subsystem", 00:16:10.627 "trtype": "$TEST_TRANSPORT", 00:16:10.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.627 "adrfam": "ipv4", 00:16:10.627 "trsvcid": "$NVMF_PORT", 00:16:10.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.627 "hdgst": ${hdgst:-false}, 00:16:10.627 "ddgst": ${ddgst:-false} 00:16:10.627 }, 00:16:10.627 "method": "bdev_nvme_attach_controller" 00:16:10.627 } 00:16:10.627 EOF 00:16:10.627 )") 00:16:10.627 03:29:47 -- nvmf/common.sh@543 -- # cat 00:16:10.627 03:29:47 -- nvmf/common.sh@545 -- # jq . 
00:16:10.627 03:29:47 -- nvmf/common.sh@546 -- # IFS=, 00:16:10.627 03:29:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:10.627 "params": { 00:16:10.627 "name": "Nvme1", 00:16:10.627 "trtype": "tcp", 00:16:10.627 "traddr": "10.0.0.2", 00:16:10.627 "adrfam": "ipv4", 00:16:10.627 "trsvcid": "4420", 00:16:10.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:10.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:10.627 "hdgst": false, 00:16:10.627 "ddgst": false 00:16:10.627 }, 00:16:10.627 "method": "bdev_nvme_attach_controller" 00:16:10.627 },{ 00:16:10.627 "params": { 00:16:10.627 "name": "Nvme2", 00:16:10.627 "trtype": "tcp", 00:16:10.627 "traddr": "10.0.0.2", 00:16:10.627 "adrfam": "ipv4", 00:16:10.627 "trsvcid": "4420", 00:16:10.627 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:10.627 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:10.627 "hdgst": false, 00:16:10.627 "ddgst": false 00:16:10.627 }, 00:16:10.627 "method": "bdev_nvme_attach_controller" 00:16:10.627 },{ 00:16:10.627 "params": { 00:16:10.627 "name": "Nvme3", 00:16:10.627 "trtype": "tcp", 00:16:10.627 "traddr": "10.0.0.2", 00:16:10.627 "adrfam": "ipv4", 00:16:10.627 "trsvcid": "4420", 00:16:10.627 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:10.627 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:10.627 "hdgst": false, 00:16:10.627 "ddgst": false 00:16:10.627 }, 00:16:10.627 "method": "bdev_nvme_attach_controller" 00:16:10.627 },{ 00:16:10.627 "params": { 00:16:10.627 "name": "Nvme4", 00:16:10.627 "trtype": "tcp", 00:16:10.627 "traddr": "10.0.0.2", 00:16:10.627 "adrfam": "ipv4", 00:16:10.627 "trsvcid": "4420", 00:16:10.627 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:10.627 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:10.627 "hdgst": false, 00:16:10.627 "ddgst": false 00:16:10.627 }, 00:16:10.627 "method": "bdev_nvme_attach_controller" 00:16:10.627 },{ 00:16:10.627 "params": { 00:16:10.627 "name": "Nvme5", 00:16:10.627 "trtype": "tcp", 00:16:10.627 "traddr": "10.0.0.2", 00:16:10.627 "adrfam": "ipv4", 00:16:10.627 "trsvcid": "4420", 00:16:10.627 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:10.627 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:10.627 "hdgst": false, 00:16:10.627 "ddgst": false 00:16:10.627 }, 00:16:10.627 "method": "bdev_nvme_attach_controller" 00:16:10.627 },{ 00:16:10.627 "params": { 00:16:10.627 "name": "Nvme6", 00:16:10.627 "trtype": "tcp", 00:16:10.627 "traddr": "10.0.0.2", 00:16:10.627 "adrfam": "ipv4", 00:16:10.627 "trsvcid": "4420", 00:16:10.627 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:10.627 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:10.627 "hdgst": false, 00:16:10.627 "ddgst": false 00:16:10.627 }, 00:16:10.627 "method": "bdev_nvme_attach_controller" 00:16:10.627 },{ 00:16:10.627 "params": { 00:16:10.627 "name": "Nvme7", 00:16:10.627 "trtype": "tcp", 00:16:10.627 "traddr": "10.0.0.2", 00:16:10.627 "adrfam": "ipv4", 00:16:10.627 "trsvcid": "4420", 00:16:10.627 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:10.627 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:10.627 "hdgst": false, 00:16:10.627 "ddgst": false 00:16:10.627 }, 00:16:10.627 "method": "bdev_nvme_attach_controller" 00:16:10.627 },{ 00:16:10.627 "params": { 00:16:10.627 "name": "Nvme8", 00:16:10.627 "trtype": "tcp", 00:16:10.627 "traddr": "10.0.0.2", 00:16:10.627 "adrfam": "ipv4", 00:16:10.627 "trsvcid": "4420", 00:16:10.627 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:10.627 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:10.627 "hdgst": false, 00:16:10.627 "ddgst": false 00:16:10.627 }, 00:16:10.627 "method": 
"bdev_nvme_attach_controller" 00:16:10.627 },{ 00:16:10.627 "params": { 00:16:10.627 "name": "Nvme9", 00:16:10.627 "trtype": "tcp", 00:16:10.627 "traddr": "10.0.0.2", 00:16:10.627 "adrfam": "ipv4", 00:16:10.627 "trsvcid": "4420", 00:16:10.627 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:10.627 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:10.627 "hdgst": false, 00:16:10.627 "ddgst": false 00:16:10.627 }, 00:16:10.627 "method": "bdev_nvme_attach_controller" 00:16:10.627 },{ 00:16:10.627 "params": { 00:16:10.627 "name": "Nvme10", 00:16:10.627 "trtype": "tcp", 00:16:10.627 "traddr": "10.0.0.2", 00:16:10.627 "adrfam": "ipv4", 00:16:10.627 "trsvcid": "4420", 00:16:10.627 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:10.627 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:10.627 "hdgst": false, 00:16:10.627 "ddgst": false 00:16:10.627 }, 00:16:10.627 "method": "bdev_nvme_attach_controller" 00:16:10.627 }' 00:16:10.627 [2024-04-19 03:29:47.927610] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:16:10.627 [2024-04-19 03:29:47.927702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid272176 ] 00:16:10.627 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.627 [2024-04-19 03:29:47.993870] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.627 [2024-04-19 03:29:48.107096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.526 Running I/O for 1 seconds... 00:16:13.460 00:16:13.460 Latency(us) 00:16:13.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.460 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:13.460 Verification LBA range: start 0x0 length 0x400 00:16:13.460 Nvme1n1 : 1.16 220.26 13.77 0.00 0.00 287071.19 22039.51 256318.58 00:16:13.460 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:13.460 Verification LBA range: start 0x0 length 0x400 00:16:13.460 Nvme2n1 : 1.16 220.54 13.78 0.00 0.00 282808.89 22039.51 243891.01 00:16:13.460 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:13.460 Verification LBA range: start 0x0 length 0x400 00:16:13.460 Nvme3n1 : 1.17 273.04 17.06 0.00 0.00 224734.63 17573.36 250104.79 00:16:13.460 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:13.460 Verification LBA range: start 0x0 length 0x400 00:16:13.460 Nvme4n1 : 1.08 236.18 14.76 0.00 0.00 254523.73 17282.09 253211.69 00:16:13.460 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:13.460 Verification LBA range: start 0x0 length 0x400 00:16:13.460 Nvme5n1 : 1.16 219.79 13.74 0.00 0.00 270073.93 20777.34 254765.13 00:16:13.460 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:13.460 Verification LBA range: start 0x0 length 0x400 00:16:13.460 Nvme6n1 : 1.18 217.07 13.57 0.00 0.00 269207.70 22136.60 251658.24 00:16:13.460 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:13.460 Verification LBA range: start 0x0 length 0x400 00:16:13.460 Nvme7n1 : 1.19 268.98 16.81 0.00 0.00 213522.96 15922.82 253211.69 00:16:13.460 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:13.460 Verification LBA range: start 0x0 length 0x400 00:16:13.460 Nvme8n1 : 1.20 267.69 16.73 0.00 0.00 211347.99 16214.09 
259425.47 00:16:13.460 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:13.460 Verification LBA range: start 0x0 length 0x400 00:16:13.460 Nvme9n1 : 1.18 220.08 13.76 0.00 0.00 251205.37 4781.70 276513.37 00:16:13.460 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:13.460 Verification LBA range: start 0x0 length 0x400 00:16:13.460 Nvme10n1 : 1.19 215.35 13.46 0.00 0.00 253491.77 23010.42 284280.60 00:16:13.460 =================================================================================================================== 00:16:13.460 Total : 2358.98 147.44 0.00 0.00 249341.27 4781.70 284280.60 00:16:13.718 03:29:51 -- target/shutdown.sh@94 -- # stoptarget 00:16:13.718 03:29:51 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:13.718 03:29:51 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:13.718 03:29:51 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:13.718 03:29:51 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:13.718 03:29:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:13.718 03:29:51 -- nvmf/common.sh@117 -- # sync 00:16:13.718 03:29:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:13.718 03:29:51 -- nvmf/common.sh@120 -- # set +e 00:16:13.718 03:29:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:13.718 03:29:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:13.718 rmmod nvme_tcp 00:16:13.718 rmmod nvme_fabrics 00:16:13.718 rmmod nvme_keyring 00:16:13.718 03:29:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:13.718 03:29:51 -- nvmf/common.sh@124 -- # set -e 00:16:13.718 03:29:51 -- nvmf/common.sh@125 -- # return 0 00:16:13.718 03:29:51 -- nvmf/common.sh@478 -- # '[' -n 271676 ']' 00:16:13.718 03:29:51 -- nvmf/common.sh@479 -- # killprocess 271676 00:16:13.718 03:29:51 -- common/autotest_common.sh@936 -- # '[' -z 271676 ']' 00:16:13.718 03:29:51 -- common/autotest_common.sh@940 -- # kill -0 271676 00:16:13.718 03:29:51 -- common/autotest_common.sh@941 -- # uname 00:16:13.718 03:29:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.718 03:29:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 271676 00:16:13.976 03:29:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:13.976 03:29:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:13.976 03:29:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 271676' 00:16:13.976 killing process with pid 271676 00:16:13.976 03:29:51 -- common/autotest_common.sh@955 -- # kill 271676 00:16:13.976 03:29:51 -- common/autotest_common.sh@960 -- # wait 271676 00:16:14.543 03:29:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:14.543 03:29:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:14.543 03:29:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:14.543 03:29:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.543 03:29:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.543 03:29:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.543 03:29:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.543 03:29:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.447 03:29:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:16.447 00:16:16.447 real 0m12.504s 00:16:16.447 user 
0m36.717s 00:16:16.447 sys 0m3.392s 00:16:16.447 03:29:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:16.447 03:29:53 -- common/autotest_common.sh@10 -- # set +x 00:16:16.447 ************************************ 00:16:16.447 END TEST nvmf_shutdown_tc1 00:16:16.447 ************************************ 00:16:16.447 03:29:53 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:16:16.447 03:29:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:16.447 03:29:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:16.447 03:29:53 -- common/autotest_common.sh@10 -- # set +x 00:16:16.706 ************************************ 00:16:16.706 START TEST nvmf_shutdown_tc2 00:16:16.706 ************************************ 00:16:16.706 03:29:54 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:16:16.706 03:29:54 -- target/shutdown.sh@99 -- # starttarget 00:16:16.706 03:29:54 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:16.706 03:29:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:16.706 03:29:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.706 03:29:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:16.706 03:29:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:16.706 03:29:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:16.706 03:29:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.706 03:29:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.706 03:29:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.706 03:29:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:16.706 03:29:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:16.706 03:29:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:16.706 03:29:54 -- common/autotest_common.sh@10 -- # set +x 00:16:16.706 03:29:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:16.706 03:29:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:16.706 03:29:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:16.706 03:29:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:16.706 03:29:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:16.706 03:29:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:16.706 03:29:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:16.706 03:29:54 -- nvmf/common.sh@295 -- # net_devs=() 00:16:16.706 03:29:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:16.706 03:29:54 -- nvmf/common.sh@296 -- # e810=() 00:16:16.706 03:29:54 -- nvmf/common.sh@296 -- # local -ga e810 00:16:16.706 03:29:54 -- nvmf/common.sh@297 -- # x722=() 00:16:16.706 03:29:54 -- nvmf/common.sh@297 -- # local -ga x722 00:16:16.706 03:29:54 -- nvmf/common.sh@298 -- # mlx=() 00:16:16.706 03:29:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:16.706 03:29:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.706 03:29:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.706 03:29:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.706 03:29:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.706 03:29:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.706 03:29:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.706 03:29:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.706 03:29:54 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.706 03:29:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.706 03:29:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.706 03:29:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.706 03:29:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:16.706 03:29:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:16.706 03:29:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:16.706 03:29:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:16.707 03:29:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:16.707 03:29:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.707 03:29:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:16.707 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:16.707 03:29:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.707 03:29:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:16.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:16.707 03:29:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:16.707 03:29:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.707 03:29:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.707 03:29:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:16.707 03:29:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.707 03:29:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:16.707 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:16.707 03:29:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.707 03:29:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.707 03:29:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.707 03:29:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:16.707 03:29:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.707 03:29:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:16.707 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:16.707 03:29:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.707 03:29:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:16.707 03:29:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:16.707 03:29:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 
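Stripped of the xtrace noise, the nvmf_tcp_init sequence traced below boils down to a handful of commands (copied from the trace; cvl_0_0 and cvl_0_1 are the two e810 ports, presumably cabled back-to-back since NET_TYPE=phy). The target side is moved into a network namespace so initiator and target traffic really crosses the wire instead of short-circuiting through the local stack:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                   # reachability check; the reverse ping follows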
00:16:16.707 03:29:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.707 03:29:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.707 03:29:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.707 03:29:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:16.707 03:29:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.707 03:29:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.707 03:29:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:16.707 03:29:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.707 03:29:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.707 03:29:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:16.707 03:29:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:16.707 03:29:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.707 03:29:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.707 03:29:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.707 03:29:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.707 03:29:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:16.707 03:29:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.707 03:29:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.707 03:29:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.707 03:29:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:16.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:16:16.707 00:16:16.707 --- 10.0.0.2 ping statistics --- 00:16:16.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.707 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:16.707 03:29:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:16:16.707 00:16:16.707 --- 10.0.0.1 ping statistics --- 00:16:16.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.707 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:16.707 03:29:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.707 03:29:54 -- nvmf/common.sh@411 -- # return 0 00:16:16.707 03:29:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:16.707 03:29:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.707 03:29:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:16.707 03:29:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.707 03:29:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:16.707 03:29:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:16.707 03:29:54 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:16.707 03:29:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:16.707 03:29:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:16.707 03:29:54 -- common/autotest_common.sh@10 -- # set +x 00:16:16.707 03:29:54 -- nvmf/common.sh@470 -- # nvmfpid=273068 00:16:16.707 03:29:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:16.707 03:29:54 -- nvmf/common.sh@471 -- # waitforlisten 273068 00:16:16.707 03:29:54 -- common/autotest_common.sh@817 -- # '[' -z 273068 ']' 00:16:16.707 03:29:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.707 03:29:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:16.707 03:29:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.707 03:29:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:16.707 03:29:54 -- common/autotest_common.sh@10 -- # set +x 00:16:16.707 [2024-04-19 03:29:54.236700] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:16:16.707 [2024-04-19 03:29:54.236790] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.966 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.966 [2024-04-19 03:29:54.305074] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.966 [2024-04-19 03:29:54.421151] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.966 [2024-04-19 03:29:54.421228] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.966 [2024-04-19 03:29:54.421244] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.966 [2024-04-19 03:29:54.421257] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.966 [2024-04-19 03:29:54.421269] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
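A quick decode of the -m 0x1E core mask handed to nvmf_tgt above: 0x1E is binary 11110, so bit 0 (core 0) is clear and bits 1-4 are set, which is why the four reactor notices just below land on cores 1-4 while core 0 stays free for the bdevperf client started later with -c 0x1 (assuming the usual SPDK convention that bit N of the mask enables core N):

for core in {0..5}; do (( (0x1E >> core) & 1 )) && echo "core $core enabled"; done   # prints cores 1-4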
00:16:16.966 [2024-04-19 03:29:54.421370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.966 [2024-04-19 03:29:54.421408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.966 [2024-04-19 03:29:54.421534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:16.966 [2024-04-19 03:29:54.421537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.901 03:29:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:17.901 03:29:55 -- common/autotest_common.sh@850 -- # return 0 00:16:17.901 03:29:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:17.901 03:29:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:17.901 03:29:55 -- common/autotest_common.sh@10 -- # set +x 00:16:17.901 03:29:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.901 03:29:55 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:17.901 03:29:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.901 03:29:55 -- common/autotest_common.sh@10 -- # set +x 00:16:17.901 [2024-04-19 03:29:55.205192] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.901 03:29:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.901 03:29:55 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:17.901 03:29:55 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:17.901 03:29:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:17.901 03:29:55 -- common/autotest_common.sh@10 -- # set +x 00:16:17.901 03:29:55 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:17.901 03:29:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:17.901 03:29:55 -- target/shutdown.sh@28 -- # cat 00:16:17.901 03:29:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:17.901 03:29:55 -- target/shutdown.sh@28 -- # cat 00:16:17.901 03:29:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:17.901 03:29:55 -- target/shutdown.sh@28 -- # cat 00:16:17.901 03:29:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:17.901 03:29:55 -- target/shutdown.sh@28 -- # cat 00:16:17.901 03:29:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:17.901 03:29:55 -- target/shutdown.sh@28 -- # cat 00:16:17.901 03:29:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:17.901 03:29:55 -- target/shutdown.sh@28 -- # cat 00:16:17.901 03:29:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:17.901 03:29:55 -- target/shutdown.sh@28 -- # cat 00:16:17.901 03:29:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:17.901 03:29:55 -- target/shutdown.sh@28 -- # cat 00:16:17.901 03:29:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:17.901 03:29:55 -- target/shutdown.sh@28 -- # cat 00:16:17.901 03:29:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:17.901 03:29:55 -- target/shutdown.sh@28 -- # cat 00:16:17.901 03:29:55 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:17.901 03:29:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.901 03:29:55 -- common/autotest_common.sh@10 -- # set +x 00:16:17.901 Malloc1 00:16:17.901 [2024-04-19 03:29:55.280408] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.901 Malloc2 
00:16:17.901 Malloc3 00:16:17.901 Malloc4 00:16:17.901 Malloc5 00:16:18.159 Malloc6 00:16:18.159 Malloc7 00:16:18.159 Malloc8 00:16:18.159 Malloc9 00:16:18.159 Malloc10 00:16:18.159 03:29:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.159 03:29:55 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:18.418 03:29:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:18.418 03:29:55 -- common/autotest_common.sh@10 -- # set +x 00:16:18.418 03:29:55 -- target/shutdown.sh@103 -- # perfpid=273258 00:16:18.418 03:29:55 -- target/shutdown.sh@104 -- # waitforlisten 273258 /var/tmp/bdevperf.sock 00:16:18.418 03:29:55 -- common/autotest_common.sh@817 -- # '[' -z 273258 ']' 00:16:18.418 03:29:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:18.418 03:29:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:18.418 03:29:55 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:18.418 03:29:55 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:18.418 03:29:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:18.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:18.418 03:29:55 -- nvmf/common.sh@521 -- # config=() 00:16:18.418 03:29:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:18.418 03:29:55 -- nvmf/common.sh@521 -- # local subsystem config 00:16:18.418 03:29:55 -- common/autotest_common.sh@10 -- # set +x 00:16:18.418 03:29:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:18.418 { 00:16:18.418 "params": { 00:16:18.418 "name": "Nvme$subsystem", 00:16:18.418 "trtype": "$TEST_TRANSPORT", 00:16:18.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.418 "adrfam": "ipv4", 00:16:18.418 "trsvcid": "$NVMF_PORT", 00:16:18.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.418 "hdgst": ${hdgst:-false}, 00:16:18.418 "ddgst": ${ddgst:-false} 00:16:18.418 }, 00:16:18.418 "method": "bdev_nvme_attach_controller" 00:16:18.418 } 00:16:18.418 EOF 00:16:18.418 )") 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # cat 00:16:18.418 03:29:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:18.418 { 00:16:18.418 "params": { 00:16:18.418 "name": "Nvme$subsystem", 00:16:18.418 "trtype": "$TEST_TRANSPORT", 00:16:18.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.418 "adrfam": "ipv4", 00:16:18.418 "trsvcid": "$NVMF_PORT", 00:16:18.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.418 "hdgst": ${hdgst:-false}, 00:16:18.418 "ddgst": ${ddgst:-false} 00:16:18.418 }, 00:16:18.418 "method": "bdev_nvme_attach_controller" 00:16:18.418 } 00:16:18.418 EOF 00:16:18.418 )") 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # cat 00:16:18.418 03:29:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:18.418 { 00:16:18.418 "params": { 00:16:18.418 "name": "Nvme$subsystem", 00:16:18.418 "trtype": "$TEST_TRANSPORT", 00:16:18.418 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:16:18.418 "adrfam": "ipv4", 00:16:18.418 "trsvcid": "$NVMF_PORT", 00:16:18.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.418 "hdgst": ${hdgst:-false}, 00:16:18.418 "ddgst": ${ddgst:-false} 00:16:18.418 }, 00:16:18.418 "method": "bdev_nvme_attach_controller" 00:16:18.418 } 00:16:18.418 EOF 00:16:18.418 )") 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # cat 00:16:18.418 03:29:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:18.418 { 00:16:18.418 "params": { 00:16:18.418 "name": "Nvme$subsystem", 00:16:18.418 "trtype": "$TEST_TRANSPORT", 00:16:18.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.418 "adrfam": "ipv4", 00:16:18.418 "trsvcid": "$NVMF_PORT", 00:16:18.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.418 "hdgst": ${hdgst:-false}, 00:16:18.418 "ddgst": ${ddgst:-false} 00:16:18.418 }, 00:16:18.418 "method": "bdev_nvme_attach_controller" 00:16:18.418 } 00:16:18.418 EOF 00:16:18.418 )") 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # cat 00:16:18.418 03:29:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:18.418 { 00:16:18.418 "params": { 00:16:18.418 "name": "Nvme$subsystem", 00:16:18.418 "trtype": "$TEST_TRANSPORT", 00:16:18.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.418 "adrfam": "ipv4", 00:16:18.418 "trsvcid": "$NVMF_PORT", 00:16:18.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.418 "hdgst": ${hdgst:-false}, 00:16:18.418 "ddgst": ${ddgst:-false} 00:16:18.418 }, 00:16:18.418 "method": "bdev_nvme_attach_controller" 00:16:18.418 } 00:16:18.418 EOF 00:16:18.418 )") 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # cat 00:16:18.418 03:29:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:18.418 { 00:16:18.418 "params": { 00:16:18.418 "name": "Nvme$subsystem", 00:16:18.418 "trtype": "$TEST_TRANSPORT", 00:16:18.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.418 "adrfam": "ipv4", 00:16:18.418 "trsvcid": "$NVMF_PORT", 00:16:18.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.418 "hdgst": ${hdgst:-false}, 00:16:18.418 "ddgst": ${ddgst:-false} 00:16:18.418 }, 00:16:18.418 "method": "bdev_nvme_attach_controller" 00:16:18.418 } 00:16:18.418 EOF 00:16:18.418 )") 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # cat 00:16:18.418 03:29:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:18.418 { 00:16:18.418 "params": { 00:16:18.418 "name": "Nvme$subsystem", 00:16:18.418 "trtype": "$TEST_TRANSPORT", 00:16:18.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.418 "adrfam": "ipv4", 00:16:18.418 "trsvcid": "$NVMF_PORT", 00:16:18.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.418 "hdgst": ${hdgst:-false}, 00:16:18.418 "ddgst": ${ddgst:-false} 00:16:18.418 }, 00:16:18.418 "method": "bdev_nvme_attach_controller" 00:16:18.418 } 00:16:18.418 EOF 00:16:18.418 )") 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # cat 00:16:18.418 03:29:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:18.418 { 00:16:18.418 "params": { 00:16:18.418 "name": "Nvme$subsystem", 00:16:18.418 "trtype": "$TEST_TRANSPORT", 00:16:18.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.418 "adrfam": "ipv4", 00:16:18.418 "trsvcid": "$NVMF_PORT", 00:16:18.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.418 "hdgst": ${hdgst:-false}, 00:16:18.418 "ddgst": ${ddgst:-false} 00:16:18.418 }, 00:16:18.418 "method": "bdev_nvme_attach_controller" 00:16:18.418 } 00:16:18.418 EOF 00:16:18.418 )") 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # cat 00:16:18.418 03:29:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:18.418 { 00:16:18.418 "params": { 00:16:18.418 "name": "Nvme$subsystem", 00:16:18.418 "trtype": "$TEST_TRANSPORT", 00:16:18.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.418 "adrfam": "ipv4", 00:16:18.418 "trsvcid": "$NVMF_PORT", 00:16:18.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.418 "hdgst": ${hdgst:-false}, 00:16:18.418 "ddgst": ${ddgst:-false} 00:16:18.418 }, 00:16:18.418 "method": "bdev_nvme_attach_controller" 00:16:18.418 } 00:16:18.418 EOF 00:16:18.418 )") 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # cat 00:16:18.418 03:29:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:18.418 03:29:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:18.418 { 00:16:18.418 "params": { 00:16:18.418 "name": "Nvme$subsystem", 00:16:18.418 "trtype": "$TEST_TRANSPORT", 00:16:18.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.418 "adrfam": "ipv4", 00:16:18.418 "trsvcid": "$NVMF_PORT", 00:16:18.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.418 "hdgst": ${hdgst:-false}, 00:16:18.418 "ddgst": ${ddgst:-false} 00:16:18.418 }, 00:16:18.418 "method": "bdev_nvme_attach_controller" 00:16:18.418 } 00:16:18.419 EOF 00:16:18.419 )") 00:16:18.419 03:29:55 -- nvmf/common.sh@543 -- # cat 00:16:18.419 03:29:55 -- nvmf/common.sh@545 -- # jq . 
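The /dev/fd/63 seen in the bdevperf command line earlier is bash process substitution: shutdown.sh hands the generated config to the app through an anonymous pipe instead of a temp file. Roughly, paraphrasing the invocation echoed in the earlier "Killed" message ($rootdir and gen_nvmf_target_json as in the test scripts; exact arguments may differ per test case):

"$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 10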
00:16:18.419 03:29:55 -- nvmf/common.sh@546 -- # IFS=, 00:16:18.419 03:29:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:18.419 "params": { 00:16:18.419 "name": "Nvme1", 00:16:18.419 "trtype": "tcp", 00:16:18.419 "traddr": "10.0.0.2", 00:16:18.419 "adrfam": "ipv4", 00:16:18.419 "trsvcid": "4420", 00:16:18.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.419 "hdgst": false, 00:16:18.419 "ddgst": false 00:16:18.419 }, 00:16:18.419 "method": "bdev_nvme_attach_controller" 00:16:18.419 },{ 00:16:18.419 "params": { 00:16:18.419 "name": "Nvme2", 00:16:18.419 "trtype": "tcp", 00:16:18.419 "traddr": "10.0.0.2", 00:16:18.419 "adrfam": "ipv4", 00:16:18.419 "trsvcid": "4420", 00:16:18.419 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:18.419 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:18.419 "hdgst": false, 00:16:18.419 "ddgst": false 00:16:18.419 }, 00:16:18.419 "method": "bdev_nvme_attach_controller" 00:16:18.419 },{ 00:16:18.419 "params": { 00:16:18.419 "name": "Nvme3", 00:16:18.419 "trtype": "tcp", 00:16:18.419 "traddr": "10.0.0.2", 00:16:18.419 "adrfam": "ipv4", 00:16:18.419 "trsvcid": "4420", 00:16:18.419 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:18.419 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:18.419 "hdgst": false, 00:16:18.419 "ddgst": false 00:16:18.419 }, 00:16:18.419 "method": "bdev_nvme_attach_controller" 00:16:18.419 },{ 00:16:18.419 "params": { 00:16:18.419 "name": "Nvme4", 00:16:18.419 "trtype": "tcp", 00:16:18.419 "traddr": "10.0.0.2", 00:16:18.419 "adrfam": "ipv4", 00:16:18.419 "trsvcid": "4420", 00:16:18.419 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:18.419 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:18.419 "hdgst": false, 00:16:18.419 "ddgst": false 00:16:18.419 }, 00:16:18.419 "method": "bdev_nvme_attach_controller" 00:16:18.419 },{ 00:16:18.419 "params": { 00:16:18.419 "name": "Nvme5", 00:16:18.419 "trtype": "tcp", 00:16:18.419 "traddr": "10.0.0.2", 00:16:18.419 "adrfam": "ipv4", 00:16:18.419 "trsvcid": "4420", 00:16:18.419 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:18.419 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:18.419 "hdgst": false, 00:16:18.419 "ddgst": false 00:16:18.419 }, 00:16:18.419 "method": "bdev_nvme_attach_controller" 00:16:18.419 },{ 00:16:18.419 "params": { 00:16:18.419 "name": "Nvme6", 00:16:18.419 "trtype": "tcp", 00:16:18.419 "traddr": "10.0.0.2", 00:16:18.419 "adrfam": "ipv4", 00:16:18.419 "trsvcid": "4420", 00:16:18.419 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:18.419 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:18.419 "hdgst": false, 00:16:18.419 "ddgst": false 00:16:18.419 }, 00:16:18.419 "method": "bdev_nvme_attach_controller" 00:16:18.419 },{ 00:16:18.419 "params": { 00:16:18.419 "name": "Nvme7", 00:16:18.419 "trtype": "tcp", 00:16:18.419 "traddr": "10.0.0.2", 00:16:18.419 "adrfam": "ipv4", 00:16:18.419 "trsvcid": "4420", 00:16:18.419 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:18.419 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:18.419 "hdgst": false, 00:16:18.419 "ddgst": false 00:16:18.419 }, 00:16:18.419 "method": "bdev_nvme_attach_controller" 00:16:18.419 },{ 00:16:18.419 "params": { 00:16:18.419 "name": "Nvme8", 00:16:18.419 "trtype": "tcp", 00:16:18.419 "traddr": "10.0.0.2", 00:16:18.419 "adrfam": "ipv4", 00:16:18.419 "trsvcid": "4420", 00:16:18.419 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:18.419 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:18.419 "hdgst": false, 00:16:18.419 "ddgst": false 00:16:18.419 }, 00:16:18.419 "method": 
"bdev_nvme_attach_controller" 00:16:18.419 },{ 00:16:18.419 "params": { 00:16:18.419 "name": "Nvme9", 00:16:18.419 "trtype": "tcp", 00:16:18.419 "traddr": "10.0.0.2", 00:16:18.419 "adrfam": "ipv4", 00:16:18.419 "trsvcid": "4420", 00:16:18.419 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:18.419 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:18.419 "hdgst": false, 00:16:18.419 "ddgst": false 00:16:18.419 }, 00:16:18.419 "method": "bdev_nvme_attach_controller" 00:16:18.419 },{ 00:16:18.419 "params": { 00:16:18.419 "name": "Nvme10", 00:16:18.419 "trtype": "tcp", 00:16:18.419 "traddr": "10.0.0.2", 00:16:18.419 "adrfam": "ipv4", 00:16:18.419 "trsvcid": "4420", 00:16:18.419 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:18.419 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:18.419 "hdgst": false, 00:16:18.419 "ddgst": false 00:16:18.419 }, 00:16:18.419 "method": "bdev_nvme_attach_controller" 00:16:18.419 }' 00:16:18.419 [2024-04-19 03:29:55.780755] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:16:18.419 [2024-04-19 03:29:55.780843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273258 ] 00:16:18.419 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.419 [2024-04-19 03:29:55.848588] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.419 [2024-04-19 03:29:55.959503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.793 Running I/O for 10 seconds... 00:16:20.359 03:29:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:20.359 03:29:57 -- common/autotest_common.sh@850 -- # return 0 00:16:20.359 03:29:57 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:20.359 03:29:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.359 03:29:57 -- common/autotest_common.sh@10 -- # set +x 00:16:20.359 03:29:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.359 03:29:57 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:20.359 03:29:57 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:20.359 03:29:57 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:20.359 03:29:57 -- target/shutdown.sh@57 -- # local ret=1 00:16:20.359 03:29:57 -- target/shutdown.sh@58 -- # local i 00:16:20.359 03:29:57 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:20.359 03:29:57 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:20.359 03:29:57 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:20.359 03:29:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.359 03:29:57 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:20.359 03:29:57 -- common/autotest_common.sh@10 -- # set +x 00:16:20.359 03:29:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.359 03:29:57 -- target/shutdown.sh@60 -- # read_io_count=67 00:16:20.359 03:29:57 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:16:20.359 03:29:57 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:20.617 03:29:58 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:20.617 03:29:58 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:20.617 03:29:58 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:20.617 03:29:58 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:20.617 03:29:58 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.617 03:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:20.617 03:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.617 03:29:58 -- target/shutdown.sh@60 -- # read_io_count=131 00:16:20.617 03:29:58 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:16:20.617 03:29:58 -- target/shutdown.sh@64 -- # ret=0 00:16:20.617 03:29:58 -- target/shutdown.sh@65 -- # break 00:16:20.617 03:29:58 -- target/shutdown.sh@69 -- # return 0 00:16:20.617 03:29:58 -- target/shutdown.sh@110 -- # killprocess 273258 00:16:20.617 03:29:58 -- common/autotest_common.sh@936 -- # '[' -z 273258 ']' 00:16:20.617 03:29:58 -- common/autotest_common.sh@940 -- # kill -0 273258 00:16:20.617 03:29:58 -- common/autotest_common.sh@941 -- # uname 00:16:20.617 03:29:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:20.617 03:29:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 273258 00:16:20.617 03:29:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:20.617 03:29:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:20.617 03:29:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 273258' 00:16:20.617 killing process with pid 273258 00:16:20.617 03:29:58 -- common/autotest_common.sh@955 -- # kill 273258 00:16:20.617 03:29:58 -- common/autotest_common.sh@960 -- # wait 273258 00:16:20.875 Received shutdown signal, test time was about 1.072718 seconds 00:16:20.875 00:16:20.875 Latency(us) 00:16:20.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.875 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.876 Verification LBA range: start 0x0 length 0x400 00:16:20.876 Nvme1n1 : 1.04 184.66 11.54 0.00 0.00 343022.30 20194.80 284280.60 00:16:20.876 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.876 Verification LBA range: start 0x0 length 0x400 00:16:20.876 Nvme2n1 : 1.05 242.76 15.17 0.00 0.00 256598.09 19612.25 253211.69 00:16:20.876 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.876 Verification LBA range: start 0x0 length 0x400 00:16:20.876 Nvme3n1 : 1.03 248.94 15.56 0.00 0.00 245661.77 34758.35 229910.00 00:16:20.876 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.876 Verification LBA range: start 0x0 length 0x400 00:16:20.876 Nvme4n1 : 1.07 299.66 18.73 0.00 0.00 200848.42 17961.72 236123.78 00:16:20.876 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.876 Verification LBA range: start 0x0 length 0x400 00:16:20.876 Nvme5n1 : 1.05 183.11 11.44 0.00 0.00 322465.19 23204.60 316902.97 00:16:20.876 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.876 Verification LBA range: start 0x0 length 0x400 00:16:20.876 Nvme6n1 : 1.06 240.61 15.04 0.00 0.00 241291.00 19126.80 253211.69 00:16:20.876 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.876 Verification LBA range: start 0x0 length 0x400 00:16:20.876 Nvme7n1 : 1.06 241.59 15.10 0.00 0.00 235776.38 21942.42 229910.00 00:16:20.876 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.876 Verification LBA range: start 0x0 length 0x400 00:16:20.876 Nvme8n1 : 1.03 249.70 15.61 0.00 0.00 222954.95 17864.63 246997.90 00:16:20.876 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.876 
Verification LBA range: start 0x0 length 0x400 00:16:20.876 Nvme9n1 : 1.05 185.62 11.60 0.00 0.00 293735.28 5218.61 312242.63 00:16:20.876 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.876 Verification LBA range: start 0x0 length 0x400 00:16:20.876 Nvme10n1 : 1.07 238.82 14.93 0.00 0.00 225861.78 22913.33 284280.60 00:16:20.876 =================================================================================================================== 00:16:20.876 Total : 2315.48 144.72 0.00 0.00 252537.30 5218.61 316902.97 00:16:21.133 03:29:58 -- target/shutdown.sh@113 -- # sleep 1 00:16:22.064 03:29:59 -- target/shutdown.sh@114 -- # kill -0 273068 00:16:22.065 03:29:59 -- target/shutdown.sh@116 -- # stoptarget 00:16:22.065 03:29:59 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:22.065 03:29:59 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:22.065 03:29:59 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:22.065 03:29:59 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:22.065 03:29:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:22.065 03:29:59 -- nvmf/common.sh@117 -- # sync 00:16:22.065 03:29:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.065 03:29:59 -- nvmf/common.sh@120 -- # set +e 00:16:22.065 03:29:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.065 03:29:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.065 rmmod nvme_tcp 00:16:22.065 rmmod nvme_fabrics 00:16:22.065 rmmod nvme_keyring 00:16:22.065 03:29:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.065 03:29:59 -- nvmf/common.sh@124 -- # set -e 00:16:22.065 03:29:59 -- nvmf/common.sh@125 -- # return 0 00:16:22.065 03:29:59 -- nvmf/common.sh@478 -- # '[' -n 273068 ']' 00:16:22.065 03:29:59 -- nvmf/common.sh@479 -- # killprocess 273068 00:16:22.065 03:29:59 -- common/autotest_common.sh@936 -- # '[' -z 273068 ']' 00:16:22.065 03:29:59 -- common/autotest_common.sh@940 -- # kill -0 273068 00:16:22.065 03:29:59 -- common/autotest_common.sh@941 -- # uname 00:16:22.065 03:29:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.065 03:29:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 273068 00:16:22.065 03:29:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:22.065 03:29:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:22.065 03:29:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 273068' 00:16:22.065 killing process with pid 273068 00:16:22.065 03:29:59 -- common/autotest_common.sh@955 -- # kill 273068 00:16:22.065 03:29:59 -- common/autotest_common.sh@960 -- # wait 273068 00:16:22.630 03:30:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:22.630 03:30:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:22.630 03:30:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:22.630 03:30:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.630 03:30:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.630 03:30:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.630 03:30:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.630 03:30:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.162 03:30:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:25.162 00:16:25.162 real 0m8.159s 
00:16:25.162 user 0m24.629s
00:16:25.162 sys 0m1.644s
00:16:25.162 03:30:02 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:16:25.162 03:30:02 -- common/autotest_common.sh@10 -- # set +x
00:16:25.162 ************************************
00:16:25.162 END TEST nvmf_shutdown_tc2
00:16:25.162 ************************************
00:16:25.162 03:30:02 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
00:16:25.162 03:30:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:16:25.162 03:30:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:25.162 03:30:02 -- common/autotest_common.sh@10 -- # set +x
00:16:25.162 ************************************
00:16:25.162 START TEST nvmf_shutdown_tc3
00:16:25.162 ************************************
00:16:25.162 03:30:02 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3
00:16:25.162 03:30:02 -- target/shutdown.sh@121 -- # starttarget
00:16:25.162 03:30:02 -- target/shutdown.sh@15 -- # nvmftestinit
00:16:25.162 03:30:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:16:25.162 03:30:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:25.162 03:30:02 -- nvmf/common.sh@437 -- # prepare_net_devs
00:16:25.162 03:30:02 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:16:25.162 03:30:02 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:16:25.162 03:30:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:25.162 03:30:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:25.162 03:30:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:25.162 03:30:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:16:25.162 03:30:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:16:25.162 03:30:02 -- nvmf/common.sh@285 -- # xtrace_disable
00:16:25.162 03:30:02 -- common/autotest_common.sh@10 -- # set +x
00:16:25.162 03:30:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:16:25.162 03:30:02 -- nvmf/common.sh@291 -- # pci_devs=()
00:16:25.162 03:30:02 -- nvmf/common.sh@291 -- # local -a pci_devs
00:16:25.162 03:30:02 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:16:25.162 03:30:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:16:25.162 03:30:02 -- nvmf/common.sh@293 -- # pci_drivers=()
00:16:25.162 03:30:02 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:16:25.162 03:30:02 -- nvmf/common.sh@295 -- # net_devs=()
00:16:25.162 03:30:02 -- nvmf/common.sh@295 -- # local -ga net_devs
00:16:25.162 03:30:02 -- nvmf/common.sh@296 -- # e810=()
00:16:25.162 03:30:02 -- nvmf/common.sh@296 -- # local -ga e810
00:16:25.162 03:30:02 -- nvmf/common.sh@297 -- # x722=()
00:16:25.162 03:30:02 -- nvmf/common.sh@297 -- # local -ga x722
00:16:25.162 03:30:02 -- nvmf/common.sh@298 -- # mlx=()
00:16:25.162 03:30:02 -- nvmf/common.sh@298 -- # local -ga mlx
00:16:25.162 03:30:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:25.162 03:30:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:25.162 03:30:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:25.162 03:30:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:25.162 03:30:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:25.162 03:30:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:25.162 03:30:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:25.162 03:30:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:25.162 03:30:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:25.162 03:30:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:25.162 03:30:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:25.162 03:30:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:16:25.162 03:30:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:16:25.162 03:30:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:16:25.162 03:30:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:16:25.162 03:30:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:16:25.162 03:30:02 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:16:25.162 03:30:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:25.162 03:30:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:16:25.162 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:16:25.162 03:30:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:25.162 03:30:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:25.162 03:30:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:25.162 03:30:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:25.162 03:30:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:25.163 03:30:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:25.163 03:30:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:16:25.163 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:16:25.163 03:30:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:25.163 03:30:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:25.163 03:30:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:25.163 03:30:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:25.163 03:30:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:25.163 03:30:02 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:16:25.163 03:30:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:16:25.163 03:30:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:16:25.163 03:30:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:25.163 03:30:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:25.163 03:30:02 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:16:25.163 03:30:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:25.163 03:30:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:16:25.163 Found net devices under 0000:0a:00.0: cvl_0_0
00:16:25.163 03:30:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:16:25.163 03:30:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:25.163 03:30:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:25.163 03:30:02 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:16:25.163 03:30:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:25.163 03:30:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:16:25.163 Found net devices under 0000:0a:00.1: cvl_0_1
00:16:25.163 03:30:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:16:25.163 03:30:02 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:16:25.163 03:30:02 -- nvmf/common.sh@403 -- # is_hw=yes
00:16:25.163 03:30:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:16:25.163 03:30:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:16:25.163 03:30:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init
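The nvmf_tcp_init trace that follows sets up the split-namespace topology for NVMe/TCP. Condensed into a plain shell sketch (the namespace, interface names, and 10.0.0.x addresses are the ones this run uses; the suite's variable plumbing and error handling are omitted):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                           # verify the path in both directions
ip netns exec "$NS" ping -c 1 10.0.0.1

Moving one port of the two-port E810 into its own namespace forces the target (10.0.0.2) and the initiator (10.0.0.1) to exchange traffic over the physical link rather than the kernel loopback path.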
00:16:25.163 03:30:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:25.163 03:30:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:25.163 03:30:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:25.163 03:30:02 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:16:25.163 03:30:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:25.163 03:30:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:25.163 03:30:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:16:25.163 03:30:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:25.163 03:30:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:25.163 03:30:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:16:25.163 03:30:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:16:25.163 03:30:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:16:25.163 03:30:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:25.163 03:30:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:25.163 03:30:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:25.163 03:30:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:25.163 03:30:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:25.163 03:30:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:25.163 03:30:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:25.163 03:30:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:25.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:25.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms
00:16:25.163
00:16:25.163 --- 10.0.0.2 ping statistics ---
00:16:25.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:25.163 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:16:25.163 03:30:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:25.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:25.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms
00:16:25.163
00:16:25.163 --- 10.0.0.1 ping statistics ---
00:16:25.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:25.163 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms
00:16:25.163 03:30:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:25.163 03:30:02 -- nvmf/common.sh@411 -- # return 0
00:16:25.163 03:30:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:16:25.163 03:30:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:25.163 03:30:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:16:25.163 03:30:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:16:25.163 03:30:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:25.163 03:30:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:16:25.163 03:30:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:16:25.163 03:30:02 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:16:25.163 03:30:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:16:25.163 03:30:02 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:25.163 03:30:02 -- common/autotest_common.sh@10 -- # set +x
00:16:25.163 03:30:02 -- nvmf/common.sh@470 -- # nvmfpid=274286
00:16:25.163 03:30:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:16:25.163 03:30:02 -- nvmf/common.sh@471 -- # waitforlisten 274286
00:16:25.163 03:30:02 -- common/autotest_common.sh@817 -- # '[' -z 274286 ']'
00:16:25.163 03:30:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:25.163 03:30:02 -- common/autotest_common.sh@822 -- # local max_retries=100
00:16:25.163 03:30:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:25.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:25.163 03:30:02 -- common/autotest_common.sh@826 -- # xtrace_disable
00:16:25.163 03:30:02 -- common/autotest_common.sh@10 -- # set +x
00:16:25.163 [2024-04-19 03:30:02.510939] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization...
00:16:25.163 [2024-04-19 03:30:02.511022] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:25.163 EAL: No free 2048 kB hugepages reported on node 1
00:16:25.163 [2024-04-19 03:30:02.581944] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:25.163 [2024-04-19 03:30:02.701733] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:25.163 [2024-04-19 03:30:02.701804] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:25.163 [2024-04-19 03:30:02.701821] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:25.163 [2024-04-19 03:30:02.701834] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:25.163 [2024-04-19 03:30:02.701846] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
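nvmfappstart, traced above, amounts to launching nvmf_tgt inside the target namespace and waiting for its RPC socket to answer. A minimal sketch, assuming scripts/rpc.py as a stand-in for the suite's waitforlisten helper (which retries up to max_retries=100):

NS_CMD="ip netns exec cvl_0_0_ns_spdk"
# -i 0: shared-memory id; -e 0xFFFF: enable all tracepoint groups; -m 0x1E: reactors on cores 1-4
$NS_CMD /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# poll until the target responds on its UNIX-domain RPC socket
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.1
done

The triple ip netns exec prefix in the traced command comes from NVMF_TARGET_NS_CMD being prepended to NVMF_APP once per target start in this job; functionally a single prefix, as above, suffices.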
00:16:25.163 [2024-04-19 03:30:02.701955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:25.163 [2024-04-19 03:30:02.701979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:16:25.163 [2024-04-19 03:30:02.702009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:16:25.163 [2024-04-19 03:30:02.702012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:25.422 03:30:02 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:16:25.422 03:30:02 -- common/autotest_common.sh@850 -- # return 0
00:16:25.422 03:30:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:16:25.422 03:30:02 -- common/autotest_common.sh@716 -- # xtrace_disable
00:16:25.422 03:30:02 -- common/autotest_common.sh@10 -- # set +x
00:16:25.422 03:30:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:25.422 03:30:02 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:25.422 03:30:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:25.422 03:30:02 -- common/autotest_common.sh@10 -- # set +x
00:16:25.422 [2024-04-19 03:30:02.856221] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:25.422 03:30:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:25.422 03:30:02 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:16:25.422 03:30:02 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:16:25.422 03:30:02 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:25.422 03:30:02 -- common/autotest_common.sh@10 -- # set +x
00:16:25.422 03:30:02 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:25.422 03:30:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:25.422 03:30:02 -- target/shutdown.sh@28 -- # cat
00:16:25.422 03:30:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:25.422 03:30:02 -- target/shutdown.sh@28 -- # cat
00:16:25.422 03:30:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:25.422 03:30:02 -- target/shutdown.sh@28 -- # cat
00:16:25.422 03:30:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:25.422 03:30:02 -- target/shutdown.sh@28 -- # cat
00:16:25.422 03:30:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:25.422 03:30:02 -- target/shutdown.sh@28 -- # cat
00:16:25.422 03:30:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:25.422 03:30:02 -- target/shutdown.sh@28 -- # cat
00:16:25.422 03:30:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:25.422 03:30:02 -- target/shutdown.sh@28 -- # cat
00:16:25.422 03:30:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:25.422 03:30:02 -- target/shutdown.sh@28 -- # cat
00:16:25.422 03:30:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:25.422 03:30:02 -- target/shutdown.sh@28 -- # cat
00:16:25.422 03:30:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:25.422 03:30:02 -- target/shutdown.sh@28 -- # cat
00:16:25.422 03:30:02 -- target/shutdown.sh@35 -- # rpc_cmd
00:16:25.422 03:30:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:25.422 03:30:02 -- common/autotest_common.sh@10 -- # set +x
00:16:25.422 Malloc1
00:16:25.422 [2024-04-19 03:30:02.946070] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:25.422 Malloc2
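Each for/cat pair above appends one subsystem's worth of RPC commands to rpcs.txt, and the Malloc1 through Malloc10 notices interleaved around this point are the target executing that batch. The loop body itself is not visible in the trace, so the following is a plausible reconstruction (the malloc size/block size and serial numbers are illustrative; the NQNs, listen address, and port match the generated bdevperf config printed further down):

for i in {1..10}; do
    cat << EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# replay the batch against the target's RPC socket (the bare rpc_cmd traced
# above does the equivalent through the suite's wrapper):
while read -r rpc; do
    scripts/rpc.py -s /var/tmp/spdk.sock $rpc
done < rpcs.txt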
00:16:25.680 Malloc3 00:16:25.680 Malloc4 00:16:25.680 Malloc5 00:16:25.680 Malloc6 00:16:25.680 Malloc7 00:16:25.938 Malloc8 00:16:25.938 Malloc9 00:16:25.938 Malloc10 00:16:25.938 03:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.938 03:30:03 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:25.938 03:30:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:25.938 03:30:03 -- common/autotest_common.sh@10 -- # set +x 00:16:25.938 03:30:03 -- target/shutdown.sh@125 -- # perfpid=274466 00:16:25.938 03:30:03 -- target/shutdown.sh@126 -- # waitforlisten 274466 /var/tmp/bdevperf.sock 00:16:25.938 03:30:03 -- common/autotest_common.sh@817 -- # '[' -z 274466 ']' 00:16:25.938 03:30:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.938 03:30:03 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:25.938 03:30:03 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:25.938 03:30:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:25.938 03:30:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:25.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:25.938 03:30:03 -- nvmf/common.sh@521 -- # config=() 00:16:25.938 03:30:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:25.938 03:30:03 -- nvmf/common.sh@521 -- # local subsystem config 00:16:25.938 03:30:03 -- common/autotest_common.sh@10 -- # set +x 00:16:25.938 03:30:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.938 03:30:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.938 { 00:16:25.938 "params": { 00:16:25.938 "name": "Nvme$subsystem", 00:16:25.938 "trtype": "$TEST_TRANSPORT", 00:16:25.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.938 "adrfam": "ipv4", 00:16:25.938 "trsvcid": "$NVMF_PORT", 00:16:25.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.938 "hdgst": ${hdgst:-false}, 00:16:25.938 "ddgst": ${ddgst:-false} 00:16:25.938 }, 00:16:25.938 "method": "bdev_nvme_attach_controller" 00:16:25.938 } 00:16:25.938 EOF 00:16:25.938 )") 00:16:25.938 03:30:03 -- nvmf/common.sh@543 -- # cat 00:16:25.939 03:30:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.939 { 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme$subsystem", 00:16:25.939 "trtype": "$TEST_TRANSPORT", 00:16:25.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "$NVMF_PORT", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.939 "hdgst": ${hdgst:-false}, 00:16:25.939 "ddgst": ${ddgst:-false} 00:16:25.939 }, 00:16:25.939 "method": "bdev_nvme_attach_controller" 00:16:25.939 } 00:16:25.939 EOF 00:16:25.939 )") 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # cat 00:16:25.939 03:30:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.939 { 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme$subsystem", 00:16:25.939 "trtype": "$TEST_TRANSPORT", 00:16:25.939 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "$NVMF_PORT", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.939 "hdgst": ${hdgst:-false}, 00:16:25.939 "ddgst": ${ddgst:-false} 00:16:25.939 }, 00:16:25.939 "method": "bdev_nvme_attach_controller" 00:16:25.939 } 00:16:25.939 EOF 00:16:25.939 )") 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # cat 00:16:25.939 03:30:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.939 { 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme$subsystem", 00:16:25.939 "trtype": "$TEST_TRANSPORT", 00:16:25.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "$NVMF_PORT", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.939 "hdgst": ${hdgst:-false}, 00:16:25.939 "ddgst": ${ddgst:-false} 00:16:25.939 }, 00:16:25.939 "method": "bdev_nvme_attach_controller" 00:16:25.939 } 00:16:25.939 EOF 00:16:25.939 )") 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # cat 00:16:25.939 03:30:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.939 { 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme$subsystem", 00:16:25.939 "trtype": "$TEST_TRANSPORT", 00:16:25.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "$NVMF_PORT", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.939 "hdgst": ${hdgst:-false}, 00:16:25.939 "ddgst": ${ddgst:-false} 00:16:25.939 }, 00:16:25.939 "method": "bdev_nvme_attach_controller" 00:16:25.939 } 00:16:25.939 EOF 00:16:25.939 )") 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # cat 00:16:25.939 03:30:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.939 { 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme$subsystem", 00:16:25.939 "trtype": "$TEST_TRANSPORT", 00:16:25.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "$NVMF_PORT", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.939 "hdgst": ${hdgst:-false}, 00:16:25.939 "ddgst": ${ddgst:-false} 00:16:25.939 }, 00:16:25.939 "method": "bdev_nvme_attach_controller" 00:16:25.939 } 00:16:25.939 EOF 00:16:25.939 )") 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # cat 00:16:25.939 03:30:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.939 { 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme$subsystem", 00:16:25.939 "trtype": "$TEST_TRANSPORT", 00:16:25.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "$NVMF_PORT", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.939 "hdgst": ${hdgst:-false}, 00:16:25.939 "ddgst": ${ddgst:-false} 00:16:25.939 }, 00:16:25.939 "method": "bdev_nvme_attach_controller" 00:16:25.939 } 00:16:25.939 EOF 00:16:25.939 )") 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # cat 00:16:25.939 03:30:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.939 { 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme$subsystem", 00:16:25.939 "trtype": "$TEST_TRANSPORT", 00:16:25.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "$NVMF_PORT", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.939 "hdgst": ${hdgst:-false}, 00:16:25.939 "ddgst": ${ddgst:-false} 00:16:25.939 }, 00:16:25.939 "method": "bdev_nvme_attach_controller" 00:16:25.939 } 00:16:25.939 EOF 00:16:25.939 )") 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # cat 00:16:25.939 03:30:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.939 { 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme$subsystem", 00:16:25.939 "trtype": "$TEST_TRANSPORT", 00:16:25.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "$NVMF_PORT", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.939 "hdgst": ${hdgst:-false}, 00:16:25.939 "ddgst": ${ddgst:-false} 00:16:25.939 }, 00:16:25.939 "method": "bdev_nvme_attach_controller" 00:16:25.939 } 00:16:25.939 EOF 00:16:25.939 )") 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # cat 00:16:25.939 03:30:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.939 { 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme$subsystem", 00:16:25.939 "trtype": "$TEST_TRANSPORT", 00:16:25.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "$NVMF_PORT", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.939 "hdgst": ${hdgst:-false}, 00:16:25.939 "ddgst": ${ddgst:-false} 00:16:25.939 }, 00:16:25.939 "method": "bdev_nvme_attach_controller" 00:16:25.939 } 00:16:25.939 EOF 00:16:25.939 )") 00:16:25.939 03:30:03 -- nvmf/common.sh@543 -- # cat 00:16:25.939 03:30:03 -- nvmf/common.sh@545 -- # jq . 
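The config+=/cat/jq entries above are nvmf/common.sh's gen_nvmf_target_json at work: one bdev_nvme_attach_controller stanza per subsystem, joined with IFS=',' and pretty-printed for bdevperf's --json input. Roughly, with the outer subsystems/bdev wrapper inferred from what bdevperf consumes (the per-stanza fields are exactly those shown in the trace):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat << EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # join the stanzas and pretty-print; the resolved output appears just below
    jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
}

Here it is invoked as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 with TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, and NVMF_PORT=4420, which yields the ten-controller JSON printed next.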
00:16:25.939 03:30:03 -- nvmf/common.sh@546 -- # IFS=, 00:16:25.939 03:30:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme1", 00:16:25.939 "trtype": "tcp", 00:16:25.939 "traddr": "10.0.0.2", 00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "4420", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:25.939 "hdgst": false, 00:16:25.939 "ddgst": false 00:16:25.939 }, 00:16:25.939 "method": "bdev_nvme_attach_controller" 00:16:25.939 },{ 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme2", 00:16:25.939 "trtype": "tcp", 00:16:25.939 "traddr": "10.0.0.2", 00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "4420", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:25.939 "hdgst": false, 00:16:25.939 "ddgst": false 00:16:25.939 }, 00:16:25.939 "method": "bdev_nvme_attach_controller" 00:16:25.939 },{ 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme3", 00:16:25.939 "trtype": "tcp", 00:16:25.939 "traddr": "10.0.0.2", 00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "4420", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:25.939 "hdgst": false, 00:16:25.939 "ddgst": false 00:16:25.939 }, 00:16:25.939 "method": "bdev_nvme_attach_controller" 00:16:25.939 },{ 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme4", 00:16:25.939 "trtype": "tcp", 00:16:25.939 "traddr": "10.0.0.2", 00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "4420", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:25.939 "hdgst": false, 00:16:25.939 "ddgst": false 00:16:25.939 }, 00:16:25.939 "method": "bdev_nvme_attach_controller" 00:16:25.939 },{ 00:16:25.939 "params": { 00:16:25.939 "name": "Nvme5", 00:16:25.939 "trtype": "tcp", 00:16:25.939 "traddr": "10.0.0.2", 00:16:25.939 "adrfam": "ipv4", 00:16:25.939 "trsvcid": "4420", 00:16:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:25.939 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:25.939 "hdgst": false, 00:16:25.940 "ddgst": false 00:16:25.940 }, 00:16:25.940 "method": "bdev_nvme_attach_controller" 00:16:25.940 },{ 00:16:25.940 "params": { 00:16:25.940 "name": "Nvme6", 00:16:25.940 "trtype": "tcp", 00:16:25.940 "traddr": "10.0.0.2", 00:16:25.940 "adrfam": "ipv4", 00:16:25.940 "trsvcid": "4420", 00:16:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:25.940 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:25.940 "hdgst": false, 00:16:25.940 "ddgst": false 00:16:25.940 }, 00:16:25.940 "method": "bdev_nvme_attach_controller" 00:16:25.940 },{ 00:16:25.940 "params": { 00:16:25.940 "name": "Nvme7", 00:16:25.940 "trtype": "tcp", 00:16:25.940 "traddr": "10.0.0.2", 00:16:25.940 "adrfam": "ipv4", 00:16:25.940 "trsvcid": "4420", 00:16:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:25.940 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:25.940 "hdgst": false, 00:16:25.940 "ddgst": false 00:16:25.940 }, 00:16:25.940 "method": "bdev_nvme_attach_controller" 00:16:25.940 },{ 00:16:25.940 "params": { 00:16:25.940 "name": "Nvme8", 00:16:25.940 "trtype": "tcp", 00:16:25.940 "traddr": "10.0.0.2", 00:16:25.940 "adrfam": "ipv4", 00:16:25.940 "trsvcid": "4420", 00:16:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:25.940 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:25.940 "hdgst": false, 00:16:25.940 "ddgst": false 00:16:25.940 }, 00:16:25.940 "method": 
"bdev_nvme_attach_controller" 00:16:25.940 },{ 00:16:25.940 "params": { 00:16:25.940 "name": "Nvme9", 00:16:25.940 "trtype": "tcp", 00:16:25.940 "traddr": "10.0.0.2", 00:16:25.940 "adrfam": "ipv4", 00:16:25.940 "trsvcid": "4420", 00:16:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:25.940 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:25.940 "hdgst": false, 00:16:25.940 "ddgst": false 00:16:25.940 }, 00:16:25.940 "method": "bdev_nvme_attach_controller" 00:16:25.940 },{ 00:16:25.940 "params": { 00:16:25.940 "name": "Nvme10", 00:16:25.940 "trtype": "tcp", 00:16:25.940 "traddr": "10.0.0.2", 00:16:25.940 "adrfam": "ipv4", 00:16:25.940 "trsvcid": "4420", 00:16:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:25.940 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:25.940 "hdgst": false, 00:16:25.940 "ddgst": false 00:16:25.940 }, 00:16:25.940 "method": "bdev_nvme_attach_controller" 00:16:25.940 }' 00:16:25.940 [2024-04-19 03:30:03.449355] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:16:25.940 [2024-04-19 03:30:03.449468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274466 ] 00:16:25.940 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.198 [2024-04-19 03:30:03.512488] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.198 [2024-04-19 03:30:03.620364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.570 Running I/O for 10 seconds... 00:16:28.135 03:30:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:28.135 03:30:05 -- common/autotest_common.sh@850 -- # return 0 00:16:28.135 03:30:05 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:28.135 03:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:28.135 03:30:05 -- common/autotest_common.sh@10 -- # set +x 00:16:28.135 03:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.135 03:30:05 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:28.135 03:30:05 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:28.135 03:30:05 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:28.135 03:30:05 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:28.136 03:30:05 -- target/shutdown.sh@57 -- # local ret=1 00:16:28.136 03:30:05 -- target/shutdown.sh@58 -- # local i 00:16:28.136 03:30:05 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:28.136 03:30:05 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:28.136 03:30:05 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:28.136 03:30:05 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:28.136 03:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:28.136 03:30:05 -- common/autotest_common.sh@10 -- # set +x 00:16:28.136 03:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.136 03:30:05 -- target/shutdown.sh@60 -- # read_io_count=67 00:16:28.136 03:30:05 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:16:28.136 03:30:05 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:28.411 03:30:05 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:28.411 03:30:05 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:28.411 03:30:05 -- target/shutdown.sh@60 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:16:28.411 03:30:05 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:16:28.411 03:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:28.411 03:30:05 -- common/autotest_common.sh@10 -- # set +x
00:16:28.411 03:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:28.411 03:30:05 -- target/shutdown.sh@60 -- # read_io_count=135
00:16:28.411 03:30:05 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']'
00:16:28.411 03:30:05 -- target/shutdown.sh@64 -- # ret=0
00:16:28.411 03:30:05 -- target/shutdown.sh@65 -- # break
00:16:28.411 03:30:05 -- target/shutdown.sh@69 -- # return 0
00:16:28.411 03:30:05 -- target/shutdown.sh@135 -- # killprocess 274286
00:16:28.411 03:30:05 -- common/autotest_common.sh@936 -- # '[' -z 274286 ']'
00:16:28.411 03:30:05 -- common/autotest_common.sh@940 -- # kill -0 274286
00:16:28.411 03:30:05 -- common/autotest_common.sh@941 -- # uname
00:16:28.411 03:30:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:28.411 03:30:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 274286
00:16:28.411 03:30:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:16:28.411 03:30:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:16:28.411 03:30:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 274286'
00:16:28.411 killing process with pid 274286
00:16:28.411 03:30:05 -- common/autotest_common.sh@955 -- # kill 274286
00:16:28.411 03:30:05 -- common/autotest_common.sh@960 -- # wait 274286
00:16:28.411 [2024-04-19 03:30:05.810557] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcb80 is same with the state(5) to be set
00:16:28.412 [2024-04-19 03:30:05.812944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff4d0 is same with the state(5) to be set
00:16:28.413 [2024-04-19 03:30:05.816732] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd4c0 is same with the state(5) to be set
00:16:28.414 [2024-04-19 03:30:05.819625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set
same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.819883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.819896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.819908] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.819920] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.819933] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.819945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.819958] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.819970] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.819982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.819995] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820020] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820032] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820045] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820058] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820070] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820083] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820095] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820108] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820120] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820132] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820145] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820157] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820169] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820182] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820211] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820236] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820248] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820260] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820273] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820333] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820345] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820369] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820439] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd950 is same with the state(5) to be set 00:16:28.414 [2024-04-19 03:30:05.820451] 
00:16:28.414 [2024-04-19 03:30:05.821781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:28.414 [2024-04-19 03:30:05.821823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for qid:0 cid:1, cid:2 and cid:3 through 2024-04-19 03:30:05.821913 ...]
00:16:28.414 [2024-04-19 03:30:05.821926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77880 is same with the state(5) to be set
[... same sequence of four ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for tqpair=0x238d390 (03:30:05.821977-03:30:05.822099), tqpair=0x23ae060 (03:30:05.822146-03:30:05.822257), tqpair=0x2554d80 (03:30:05.822311-03:30:05.822452) and tqpair=0x23b9ac0 (03:30:05.822495-03:30:05.822615) ...]
00:16:28.415 [2024-04-19 03:30:05.823143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fdde0 is same with the state(5) to be set
[... identical message repeated for tqpair=0x9fdde0 through 2024-04-19 03:30:05.824092, interleaved with the WRITE records below ...]
00:16:28.415 [2024-04-19 03:30:05.823241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:28.415 [2024-04-19 03:30:05.823267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE / ABORTED - SQ DELETION pair repeated for sqid:1 cid:1 through cid:63, lba stepping by 128 from 24704 to 32640, len:128, through 2024-04-19 03:30:05.825175 ...]
00:16:28.418 [2024-04-19 03:30:05.825211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:16:28.418 [2024-04-19 03:30:05.825292] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2521280 was disconnected and freed. reset controller.
00:16:28.418 [2024-04-19 03:30:05.825922] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fe290 is same with the state(5) to be set [identical message repeated 63 times through 03:30:05.826883; duplicates elided] 
00:16:28.418 [2024-04-19 03:30:05.829047] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fe720 is same with the state(5) to be set [identical message repeated 63 times through 03:30:05.830514; duplicates elided; two distinct messages were interleaved mid-line with this run and are untangled below] 
00:16:28.419 [2024-04-19 03:30:05.829686] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 
00:16:28.419 [2024-04-19 03:30:05.829737] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2554d80 (9): Bad file descriptor 
00:16:28.419 [2024-04-19 03:30:05.832086] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:16:28.419 [2024-04-19 03:30:05.832287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:16:28.419 [2024-04-19 03:30:05.832335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9febb0 is same with the state(5) to be set 
00:16:28.419 [2024-04-19 03:30:05.832486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:16:28.419 [2024-04-19 03:30:05.832515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2554d80 with addr=10.0.0.2, port=4420 
00:16:28.419 [2024-04-19 03:30:05.832532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2554d80 is same with the state(5) to be set
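The errno = 111 on the two connect() failures above is Linux's ECONNREFUSED: while the controller is being reset, the host keeps retrying the NVMe/TCP socket and the target side refuses the connection. A minimal sketch (plain BSD sockets rather than SPDK's posix_sock layer; address and port are copied from the log, and it assumes 10.0.0.2 is reachable but nothing is listening on 4420) reproduces the same errno:

    /* Minimal sketch (not SPDK code): reproduce the
     * "connect() failed, errno = 111" pattern from the log. On Linux,
     * a reachable host with no listener on the port refuses the SYN,
     * so connect() fails with errno 111 (ECONNREFUSED). */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),      /* IANA-assigned NVMe/TCP port */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno)); /* 111: Connection refused */

        close(fd);
        return 0;
    }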
00:16:28.419 [2024-04-19 03:30:05.832591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:28.419 [2024-04-19 03:30:05.832613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [analogous ASYNC EVENT REQUEST/ABORTED - SQ DELETION pairs for admin cid 1-3 elided] 
00:16:28.419 [2024-04-19 03:30:05.832729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f6f60 is same with the state(5) to be set [the same four-command ASYNC EVENT REQUEST/ABORTED - SQ DELETION block repeats for tqpair=0x23d9690, 0x25531d0, 0x23ccf20 and 0x23d6630; duplicates elided; the following flush errors were interleaved with those blocks] 
00:16:28.419 [2024-04-19 03:30:05.832941] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77880 (9): Bad file descriptor 
00:16:28.419 [2024-04-19 03:30:05.832970] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238d390 (9): Bad file descriptor 
00:16:28.419 [2024-04-19 03:30:05.832998] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ae060 (9): Bad file descriptor 
00:16:28.420 [2024-04-19 03:30:05.833196] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9ac0 (9): Bad file descriptor 
00:16:28.420 [2024-04-19 03:30:05.833601] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 [identical message repeated 4 times through 03:30:05.833834]
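The "Unexpected PDU type 0x00" entries are nvme_tcp_pdu_ch_handle rejecting the first byte of an incoming PDU's 8-byte common header: a zero-filled read from a torn-down socket decodes as pdu_type 0x00 (ICReq), which is never valid on an established host-side connection. A rough illustration follows (not SPDK's parser; the field layout follows the NVMe/TCP common header, and the accepted-type list is the set a host may receive):

    /* Minimal sketch (not SPDK code): why a dying connection logs
     * "Unexpected PDU type 0x00". Every NVMe/TCP PDU starts with an
     * 8-byte common header; a buffer of zeros therefore decodes as
     * pdu_type 0x00 (ICReq), which the host-side state machine rejects. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct nvme_tcp_common_hdr {      /* NVMe/TCP PDU common header */
        uint8_t  pdu_type;
        uint8_t  flags;
        uint8_t  hlen;                /* header length */
        uint8_t  pdo;                 /* PDU data offset */
        uint32_t plen;                /* total PDU length (little-endian) */
    };

    static int host_check_pdu_type(uint8_t t)
    {
        switch (t) {
        case 0x01:                    /* ICResp */
        case 0x03:                    /* C2HTermReq */
        case 0x05:                    /* CapsuleResp */
        case 0x07:                    /* C2HData */
        case 0x09:                    /* R2T */
            return 0;                 /* PDU types a host may receive */
        default:
            fprintf(stderr, "Unexpected PDU type 0x%02x\n", t);
            return -1;
        }
    }

    int main(void)
    {
        uint8_t wire[8] = {0};        /* zeros, as read from a reset socket */
        struct nvme_tcp_common_hdr ch;
        memcpy(&ch, wire, sizeof(ch));
        return host_check_pdu_type(ch.pdu_type) ? 1 : 0;
    }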
00:16:28.420 [2024-04-19 03:30:05.834050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:28.420 [2024-04-19 03:30:05.834073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [analogous WRITE/ABORTED - SQ DELETION pairs for cid 26-63, lba 27904-32640 in steps of 128, elided] 
00:16:28.421 [2024-04-19 03:30:05.835198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:28.421 [2024-04-19 03:30:05.835212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [analogous READ/ABORTED - SQ DELETION pairs for cid 1-24, lba 24704-27648 in steps of 128, elided]
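The elided run is mechanical: every aborted command covers len:128 blocks and successive cids advance the start LBA by 128, so the WRITE and READ entries tile two contiguous regions. Counting them back (a trivial sketch using the boundary values from the log) recovers 64 aborted commands, one per cid 0-63, i.e. the full set outstanding when the submission queue was deleted:

    /* Trivial sketch: recover the aborted-command count from the LBA
     * boundaries in the elided run above. Each command is len:128
     * blocks and consecutive cids step the start LBA by 128. */
    #include <stdio.h>

    int main(void)
    {
        unsigned writes = (32640 - 27776) / 128 + 1;   /* WRITE cid 25..63 */
        unsigned reads  = (27648 - 24576) / 128 + 1;   /* READ  cid 0..24  */
        printf("%u writes + %u reads = %u aborted commands\n",
               writes, reads, writes + reads);         /* 39 + 25 = 64 */
        return 0;
    }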
00:16:28.422 [2024-04-19 03:30:05.835987] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2385120 was disconnected and freed. reset controller. 
00:16:28.422 [2024-04-19 03:30:05.836076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:28.422 [2024-04-19 03:30:05.836095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [analogous WRITE/ABORTED - SQ DELETION pairs for cid 48-63, lba 22528-24448 in steps of 128, elided] 
00:16:28.422 [2024-04-19 03:30:05.836591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:28.422 [2024-04-19 03:30:05.836604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [analogous READ/ABORTED - SQ DELETION pairs for cid 1-31, lba 16512-20352 in steps of 128, elided; timestamps jump from 03:30:05.837368 to 03:30:05.849334 within the run] 
00:16:28.423 [2024-04-19 03:30:05.849523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:28.423 [2024-04-19 03:30:05.849537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.423 [2024-04-19 03:30:05.849565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.423 [2024-04-19 03:30:05.849579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.423 [2024-04-19 03:30:05.849594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.423 [2024-04-19 03:30:05.849607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.423 [2024-04-19 03:30:05.849622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.423 [2024-04-19 03:30:05.849635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.423 [2024-04-19 03:30:05.849650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.423 [2024-04-19 03:30:05.849663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.423 [2024-04-19 03:30:05.849679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.423 [2024-04-19 03:30:05.849692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.423 [2024-04-19 03:30:05.849707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.423 [2024-04-19 03:30:05.849720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.423 [2024-04-19 03:30:05.849735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.423 [2024-04-19 03:30:05.849748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.423 [2024-04-19 03:30:05.849763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.423 [2024-04-19 03:30:05.849776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.423 [2024-04-19 03:30:05.849791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.423 [2024-04-19 03:30:05.849805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.423 [2024-04-19 03:30:05.849819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.423 [2024-04-19 03:30:05.849833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
[condensed: 03:30:05.849 - READ sqid:1 cid:43-46 nsid:1 lba:21888-22272 len:128, each aborted likewise with SQ DELETION (00/08)]
00:16:28.423 [2024-04-19 03:30:05.850089] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2386500 was disconnected and freed. reset controller.
00:16:28.423 [2024-04-19 03:30:05.850234] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2554d80 (9): Bad file descriptor
00:16:28.424 [2024-04-19 03:30:05.850311] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:28.424 [2024-04-19 03:30:05.850337] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f6f60 (9): Bad file descriptor
00:16:28.424 [2024-04-19 03:30:05.850362] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d9690 (9): Bad file descriptor
00:16:28.424 [2024-04-19 03:30:05.850427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25531d0 (9): Bad file descriptor
00:16:28.424 [2024-04-19 03:30:05.850466] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ccf20 (9): Bad file descriptor
00:16:28.424 [2024-04-19 03:30:05.850495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d6630 (9): Bad file descriptor
00:16:28.424 [2024-04-19 03:30:05.850670] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:16:28.424 [2024-04-19 03:30:05.853033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:16:28.424 [2024-04-19 03:30:05.853059] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:16:28.424 [2024-04-19 03:30:05.853077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
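[editor's note: the run above is consistent with the failover path being exercised - every in-flight READ/WRITE on qid:1 is completed ABORTED - SQ DELETION (00/08) while the submission queue is torn down for the controller reset, and only then do the one-off errors (the flush failures and the failed reinitialization of nqn.2016-06.io.spdk:cnode2) surface. When triaging a log like this, a small helper along the following lines - a hypothetical sketch, not part of SPDK, with the record layout taken from the lines above - can collapse the repetitive command records into ranges so those one-off errors stand out:

    import re
    import sys
    from collections import defaultdict

    # Matches the command records printed by nvme_io_qpair_print_command above.
    CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
                     r"sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:\d+")

    runs = defaultdict(list)              # (opcode, sqid) -> [(cid, lba), ...]
    for line in sys.stdin:
        for m in CMD.finditer(line):      # a physical line may carry several records
            op, sqid, cid, lba = m.groups()
            runs[(op, int(sqid))].append((int(cid), int(lba)))
        if "*ERROR*" in line:
            print(line.rstrip())          # surface one-off errors verbatim
    for (op, sqid), cmds in sorted(runs.items()):
        cids = sorted(c for c, _ in cmds)
        lbas = sorted(l for _, l in cmds)
        print(f"{op} sqid:{sqid}: {len(cmds)} commands, "
              f"cid {cids[0]}-{cids[-1]}, lba {lbas[0]}-{lbas[-1]}")

Fed this excerpt on stdin it would echo the *ERROR* records and print one summary line per opcode instead of dozens of near-identical command/completion pairs.]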
[condensed: 03:30:05.853-05.854 - second abort flood after the reconnect attempt: READ sqid:1 cid:4-63 nsid:1 lba:16896-24448 len:128 and WRITE sqid:1 cid:0-3 nsid:1 lba:24576-24960 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:16:28.425 [2024-04-19 03:30:05.855005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(5) to be set
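[editor's note: in these condensed runs each command advances lba by exactly its len of 128 blocks, so the k-th command of a run sits at lba_0 + 128*k; for the flood above, cid 63 at 16896 + 59*128 = 24448 matches the last record, i.e. the aborted commands cover one contiguous extent with no gaps in between.]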
[condensed: 03:30:05.856-05.858 - third abort flood on the next qpair: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:16:28.427 [2024-04-19 03:30:05.858122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25226d0 is same with the state(5) to be set
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.427 [2024-04-19 03:30:05.859864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.427 [2024-04-19 03:30:05.859880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.859892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.859907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.859920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.859935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.859947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.859962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.859975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.859989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.860975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.860988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.428 [2024-04-19 03:30:05.861002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.428 [2024-04-19 03:30:05.861015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.861030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.861043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.861057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.861070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.861085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.861097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.861113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.861126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.861140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.861153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.861168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.861180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.861195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.861208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.861221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2523bc0 is same with the state(5) to be set 00:16:28.429 [2024-04-19 03:30:05.862695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.862725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.862746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.862761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.862776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.862789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.862804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.862817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.862832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.862845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.862860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.862873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.862888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.862901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.862916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.862929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.862950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.862964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.862978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.862991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.429 [2024-04-19 03:30:05.863595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.429 [2024-04-19 03:30:05.863608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.863636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.863664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.863692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.863720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.863747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.863775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.863803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.863834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.863862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.863896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.863924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.863951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.863979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.863993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.864006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.864021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.864034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.864049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.864062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.864077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:28.430 [2024-04-19 03:30:05.864091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.864105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.864118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.864133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.864146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.864160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.864180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.864196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.864209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.864224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.864237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.864252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.864266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.864281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.864294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.864309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.864322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.864343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 03:30:05.864357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.430 [2024-04-19 03:30:05.864372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.430 [2024-04-19 
00:16:28.431 [2024-04-19 03:30:05.873729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249bcc0 is same with the state(5) to be set
00:16:28.431 [2024-04-19 03:30:05.875579] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:16:28.431 [2024-04-19 03:30:05.875621] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:28.431 [2024-04-19 03:30:05.875636] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:28.431 [2024-04-19 03:30:05.875747] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:28.431 [2024-04-19 03:30:05.875779] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:28.431 [2024-04-19 03:30:05.875805] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:28.431 [2024-04-19 03:30:05.875832] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:28.431 [2024-04-19 03:30:05.876031] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:16:28.431 [2024-04-19 03:30:05.876638] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:16:28.431 [2024-04-19 03:30:05.876665] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:16:28.431 [2024-04-19 03:30:05.876681] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:16:28.431 [2024-04-19 03:30:05.876696] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:16:28.431 [2024-04-19 03:30:05.876965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.431 [2024-04-19 03:30:05.877125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.431 [2024-04-19 03:30:05.877150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f6f60 with addr=10.0.0.2, port=4420
00:16:28.431 [2024-04-19 03:30:05.877167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f6f60 is same with the state(5) to be set
00:16:28.431 [2024-04-19 03:30:05.877310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.431 [2024-04-19 03:30:05.877446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.431 [2024-04-19 03:30:05.877471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77880 with addr=10.0.0.2, port=4420
00:16:28.431 [2024-04-19 03:30:05.877486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77880 is same with the state(5) to be set
00:16:28.431 [2024-04-19 03:30:05.878624] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:16:28.431 [2024-04-19 03:30:05.878803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.431 [2024-04-19 03:30:05.878947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.431 [2024-04-19 03:30:05.878972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238d390 with addr=10.0.0.2, port=4420
00:16:28.431 [2024-04-19 03:30:05.878987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238d390 is same with the state(5) to be set
00:16:28.431 [2024-04-19 03:30:05.879137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.431 [2024-04-19 03:30:05.879267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.431 [2024-04-19 03:30:05.879291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae060 with addr=10.0.0.2, port=4420
00:16:28.431 [2024-04-19 03:30:05.879314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ae060 is same with the state(5) to be set
00:16:28.431 [2024-04-19 03:30:05.879459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.431 [2024-04-19 03:30:05.879604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.431 [2024-04-19 03:30:05.879627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d6630 with addr=10.0.0.2, port=4420
00:16:28.431 [2024-04-19 03:30:05.879642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d6630 is same with the state(5) to be set
00:16:28.431 [2024-04-19 03:30:05.879763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.431 [2024-04-19 03:30:05.879881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.431 [2024-04-19 03:30:05.879905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9ac0 with addr=10.0.0.2, port=4420
00:16:28.431 [2024-04-19 03:30:05.879920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9ac0 is same with the state(5) to be set
00:16:28.431 [2024-04-19 03:30:05.879943] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f6f60 (9): Bad file descriptor
00:16:28.431 [2024-04-19 03:30:05.879962] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77880 (9): Bad file descriptor
00:16:28.431 [2024-04-19 03:30:05.880056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:28.431 [2024-04-19 03:30:05.880079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:28.432 [2024-04-19 03:30:05.881450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.432 [2024-04-19 03:30:05.881465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.432 [2024-04-19 03:30:05.881478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.432 [2024-04-19 03:30:05.881493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.432 [2024-04-19 03:30:05.881506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.432 [2024-04-19 03:30:05.881521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.432 [2024-04-19 03:30:05.881535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.432 [2024-04-19 03:30:05.881551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.432 [2024-04-19 03:30:05.881563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.432 [2024-04-19 03:30:05.881579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.432 [2024-04-19 03:30:05.881592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.432 [2024-04-19 03:30:05.881607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.432 [2024-04-19 03:30:05.881620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.432 [2024-04-19 03:30:05.881635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.432 [2024-04-19 03:30:05.881649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.432 [2024-04-19 03:30:05.881664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.432 [2024-04-19 03:30:05.881678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.432 [2024-04-19 03:30:05.881692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.432 [2024-04-19 03:30:05.881706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.432 [2024-04-19 03:30:05.881720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.432 [2024-04-19 
03:30:05.881734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.432 [2024-04-19 03:30:05.881753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.432 [2024-04-19 03:30:05.881766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.432 [2024-04-19 03:30:05.881782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.432 [2024-04-19 03:30:05.881795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.432 [2024-04-19 03:30:05.881810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.881823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.881838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.881851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.881866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.881880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.881895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.881907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.881921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2525020 is same with the state(5) to be set 00:16:28.433 [2024-04-19 03:30:05.883188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.883976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.883991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.884004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.884026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.884039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.884054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.884067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.433 [2024-04-19 03:30:05.884082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.433 [2024-04-19 03:30:05.884095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.884982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.884995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.885010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.885023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.885038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.885051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.885066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.434 [2024-04-19 03:30:05.885079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.434 [2024-04-19 03:30:05.885093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23827c0 is same with the state(5) to be set 00:16:28.435 [2024-04-19 03:30:05.886356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886650] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.886986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.886999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.435 [2024-04-19 03:30:05.887409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.435 [2024-04-19 03:30:05.887429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:28.436 [2024-04-19 03:30:05.887828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.887968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.887985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.888000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.888014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.888028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.888042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.888057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.888071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 03:30:05.888086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.436 [2024-04-19 03:30:05.888100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.436 [2024-04-19 
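A note for reading the completion dumps above: the parenthesized pair in each spdk_nvme_print_completion line is the NVMe status printed as (status code type/status code). The minimal standalone C sketch below (illustrative only, not SPDK source) decodes the (00/08) pair seen here: status code type 0x0 is Generic Command Status, and status code 0x08 within that type is Command Aborted due to SQ Deletion, which is why every in-flight READ/WRITE on these qpairs completes as ABORTED - SQ DELETION once the controller reset deletes the submission queue.

    #include <stdio.h>

    /* Map a few NVMe Generic Command Status codes (SCT 0x0) to the
     * strings SPDK prints; only the values relevant to this log are
     * listed, so this is a sketch rather than a full decoder. */
    static const char *decode_generic_sc(unsigned int sc)
    {
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x07: return "ABORTED - BY REQUEST";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   return "(other generic status)";
        }
    }

    int main(void)
    {
        unsigned int sct = 0x0;  /* status code type from the log: 00 */
        unsigned int sc  = 0x08; /* status code from the log: 08 */

        if (sct == 0x0) /* 0x0 = Generic Command Status type */
            printf("(%02x/%02x) => %s\n", sct, sc, decode_generic_sc(sc));
        return 0;
    }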
00:16:28.436 [2024-04-19 03:30:05.890614] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:16:28.436 [2024-04-19 03:30:05.890647] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:16:28.436 task offset: 24576 on job bdev=Nvme2n1 fails
00:16:28.436 
00:16:28.436                                                                                 Latency(us)
00:16:28.436 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s     TO/s    Average        min        max
00:16:28.436 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:28.436 Job: Nvme1n1 ended in about 0.89 seconds with error
00:16:28.436 Verification LBA range: start 0x0 length 0x400
00:16:28.436 Nvme1n1                     :       0.89   147.81     9.24    71.66     0.00  288320.26   19709.35  257872.02
00:16:28.436 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:28.436 Job: Nvme2n1 ended in about 0.87 seconds with error
00:16:28.436 Verification LBA range: start 0x0 length 0x400
00:16:28.436 Nvme2n1                     :       0.87   221.84    13.86    73.95     0.00  209258.00   21845.33  243891.01
00:16:28.436 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:28.436 Job: Nvme3n1 ended in about 0.90 seconds with error
00:16:28.436 Verification LBA range: start 0x0 length 0x400
00:16:28.436 Nvme3n1                     :       0.90   142.83     8.93    71.41     0.00  283293.52   18835.53  248551.35
00:16:28.436 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:28.436 Job: Nvme4n1 ended in about 0.90 seconds with error
00:16:28.436 Verification LBA range: start 0x0 length 0x400
00:16:28.436 Nvme4n1                     :       0.90   217.95    13.62    71.17     0.00  205442.85   16990.81  250104.79
00:16:28.436 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:28.436 Job: Nvme5n1 ended in about 0.92 seconds with error
00:16:28.436 Verification LBA range: start 0x0 length 0x400
00:16:28.436 Nvme5n1 : 0.92 143.49 8.97 69.57 0.00 273419.36 20971.52 264085.81
00:16:28.436 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:28.436 Job: Nvme6n1 ended in about 0.92 seconds with error
00:16:28.436 Verification LBA range: start 0x0 length 0x400
00:16:28.436 Nvme6n1 : 0.92 138.66 8.67 69.33 0.00 274157.42 19126.80 293601.28
00:16:28.436 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:28.436 Job: Nvme7n1 ended in about 0.93 seconds with error
00:16:28.436 Verification LBA range: start 0x0 length 0x400
00:16:28.436 Nvme7n1 : 0.93 138.19 8.64 69.10 0.00 269225.78 22816.24 257872.02
00:16:28.436 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:28.436 Job: Nvme8n1 ended in about 0.89 seconds with error
00:16:28.437 Verification LBA range: start 0x0 length 0x400
00:16:28.437 Nvme8n1 : 0.89 216.01 13.50 72.00 0.00 188060.44 19126.80 250104.79
00:16:28.437 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:28.437 Job: Nvme9n1 ended in about 0.89 seconds with error
00:16:28.437 Verification LBA range: start 0x0 length 0x400
00:16:28.437 Nvme9n1 : 0.89 143.83 8.99 71.92 0.00 245487.19 20097.71 293601.28
00:16:28.437 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:28.437 Job: Nvme10n1 ended in about 0.91 seconds with error
00:16:28.437 Verification LBA range: start 0x0 length 0x400
00:16:28.437 Nvme10n1 : 0.91 140.38 8.77 70.19 0.00 247058.33 20971.52 257872.02
00:16:28.437 ===================================================================================================================
00:16:28.437 Total : 1651.00 103.19 710.30 0.00 244124.43 16990.81 293601.28
00:16:28.437 [2024-04-19 03:30:05.920038] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:28.437 [2024-04-19 03:30:05.920115] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:16:28.437 [2024-04-19 03:30:05.920464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.437 [2024-04-19 03:30:05.920634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:28.437 [2024-04-19 03:30:05.920661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2554d80 with addr=10.0.0.2, port=4420
00:16:28.437 [2024-04-19 03:30:05.920680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2554d80 is same with the state(5) to be set
00:16:28.437 [2024-04-19 03:30:05.920706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238d390 (9): Bad file descriptor
00:16:28.437 [2024-04-19 03:30:05.920728] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ae060 (9): Bad file descriptor
00:16:28.437 [2024-04-19 03:30:05.920745] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d6630 (9): Bad file descriptor
00:16:28.437 [2024-04-19 03:30:05.920762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9ac0 (9): Bad file descriptor
00:16:28.437 [2024-04-19 03:30:05.920791] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:16:28.437 [2024-04-19 03:30:05.920804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:16:28.437 [2024-04-19 03:30:05.920820] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:16:28.437 [2024-04-19 03:30:05.920845] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:16:28.437 [2024-04-19 03:30:05.920859] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:16:28.437 [2024-04-19 03:30:05.920872] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:28.437 [2024-04-19 03:30:05.920944] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:28.437 [2024-04-19 03:30:05.920970] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:28.437 [2024-04-19 03:30:05.920988] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:28.437 [2024-04-19 03:30:05.921007] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:28.437 [2024-04-19 03:30:05.921024] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:28.437 [2024-04-19 03:30:05.921042] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:28.437 [2024-04-19 03:30:05.921061] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2554d80 (9): Bad file descriptor
00:16:28.437 [2024-04-19 03:30:05.921514] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:28.437 [2024-04-19 03:30:05.921539] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:28.437 [2024-04-19 03:30:05.921704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.437 [2024-04-19 03:30:05.921844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.437 [2024-04-19 03:30:05.921869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25531d0 with addr=10.0.0.2, port=4420 00:16:28.437 [2024-04-19 03:30:05.921884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25531d0 is same with the state(5) to be set 00:16:28.437 [2024-04-19 03:30:05.922006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.437 [2024-04-19 03:30:05.922145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.437 [2024-04-19 03:30:05.922169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d9690 with addr=10.0.0.2, port=4420 00:16:28.437 [2024-04-19 03:30:05.922185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9690 is same with the state(5) to be set 00:16:28.437 [2024-04-19 03:30:05.922320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.437 [2024-04-19 03:30:05.922457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.437 [2024-04-19 03:30:05.922483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ccf20 with addr=10.0.0.2, port=4420 00:16:28.437 [2024-04-19 03:30:05.922499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ccf20 is same with the state(5) to be set 00:16:28.437 [2024-04-19 03:30:05.922515] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:16:28.437 [2024-04-19 03:30:05.922527] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:16:28.437 [2024-04-19 03:30:05.922539] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:16:28.437 [2024-04-19 03:30:05.922557] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:16:28.437 [2024-04-19 03:30:05.922575] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:16:28.437 [2024-04-19 03:30:05.922588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:16:28.437 [2024-04-19 03:30:05.922604] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:16:28.437 [2024-04-19 03:30:05.922617] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:16:28.437 [2024-04-19 03:30:05.922629] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:16:28.437 [2024-04-19 03:30:05.922646] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:16:28.437 [2024-04-19 03:30:05.922659] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:16:28.437 [2024-04-19 03:30:05.922672] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:16:28.437 [2024-04-19 03:30:05.922717] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:28.437 [2024-04-19 03:30:05.922740] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:28.437 [2024-04-19 03:30:05.922758] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:28.437 [2024-04-19 03:30:05.922774] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:28.437 [2024-04-19 03:30:05.922791] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:28.437 [2024-04-19 03:30:05.923670] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:28.437 [2024-04-19 03:30:05.923694] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:28.437 [2024-04-19 03:30:05.923707] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:28.437 [2024-04-19 03:30:05.923718] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:28.437 [2024-04-19 03:30:05.923742] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25531d0 (9): Bad file descriptor 00:16:28.437 [2024-04-19 03:30:05.923763] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d9690 (9): Bad file descriptor 00:16:28.437 [2024-04-19 03:30:05.923780] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ccf20 (9): Bad file descriptor 00:16:28.437 [2024-04-19 03:30:05.923795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:28.437 [2024-04-19 03:30:05.923807] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:16:28.437 [2024-04-19 03:30:05.923820] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:16:28.437 [2024-04-19 03:30:05.923892] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:28.437 [2024-04-19 03:30:05.923916] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:16:28.437 [2024-04-19 03:30:05.923933] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:28.437 [2024-04-19 03:30:05.923960] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:16:28.437 [2024-04-19 03:30:05.923975] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:16:28.437 [2024-04-19 03:30:05.923988] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:16:28.437 [2024-04-19 03:30:05.924006] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:16:28.437 [2024-04-19 03:30:05.924026] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:16:28.437 [2024-04-19 03:30:05.924038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:16:28.437 [2024-04-19 03:30:05.924053] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:16:28.438 [2024-04-19 03:30:05.924066] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:16:28.438 [2024-04-19 03:30:05.924078] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:16:28.438 [2024-04-19 03:30:05.924139] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:28.438 [2024-04-19 03:30:05.924158] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:28.438 [2024-04-19 03:30:05.924170] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:28.438 [2024-04-19 03:30:05.924312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.438 [2024-04-19 03:30:05.924473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.438 [2024-04-19 03:30:05.924498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77880 with addr=10.0.0.2, port=4420 00:16:28.438 [2024-04-19 03:30:05.924513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77880 is same with the state(5) to be set 00:16:28.438 [2024-04-19 03:30:05.924644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.438 [2024-04-19 03:30:05.924769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.438 [2024-04-19 03:30:05.924792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f6f60 with addr=10.0.0.2, port=4420 00:16:28.438 [2024-04-19 03:30:05.924806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f6f60 is same with the state(5) to be set 00:16:28.438 [2024-04-19 03:30:05.924850] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77880 (9): Bad file descriptor 00:16:28.438 [2024-04-19 03:30:05.924874] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f6f60 (9): Bad file descriptor 00:16:28.438 [2024-04-19 03:30:05.924914] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:28.438 [2024-04-19 03:30:05.924931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:28.438 [2024-04-19 03:30:05.924944] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:28.438 [2024-04-19 03:30:05.924960] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:16:28.438 [2024-04-19 03:30:05.924972] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:16:28.438 [2024-04-19 03:30:05.924984] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:16:28.438 [2024-04-19 03:30:05.925019] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:28.438 [2024-04-19 03:30:05.925036] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
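[Editor's aside] The block above is one repeating failure pattern per controller: connect() returns errno 111 (ECONNREFUSED, expected while the target is being shut down underneath the initiator), flushing the dead qpair fails with errno 9 (EBADF), and the reconnect poller gives up with "controller reinitialization failed". A minimal triage sketch for runs like this, assuming the console output was saved to a file (console.log is a hypothetical name):

grep -oE 'nqn\.2016-06\.io\.spdk:cnode[0-9]+\] in failed state' console.log | sort | uniq -c
# one count per subsystem NQN; in this run all ten cnodes show up at least once

Separately, the IOPS and MiB/s columns of the bdevperf summary further up are consistent with the 65536-byte IO size printed in each Job line; checking the Nvme1n1 row:

awk 'BEGIN { printf "%.2f MiB/s\n", 147.81 * 65536 / (1024 * 1024) }'
# prints 9.24, matching the table (147.81 IOPS at 64 KiB per IO)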
00:16:29.015 03:30:06 -- target/shutdown.sh@136 -- # nvmfpid= 00:16:29.015 03:30:06 -- target/shutdown.sh@139 -- # sleep 1 00:16:29.949 03:30:07 -- target/shutdown.sh@142 -- # kill -9 274466 00:16:29.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (274466) - No such process 00:16:29.949 03:30:07 -- target/shutdown.sh@142 -- # true 00:16:29.949 03:30:07 -- target/shutdown.sh@144 -- # stoptarget 00:16:29.949 03:30:07 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:29.949 03:30:07 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:29.949 03:30:07 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:29.949 03:30:07 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:29.949 03:30:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:29.949 03:30:07 -- nvmf/common.sh@117 -- # sync 00:16:29.949 03:30:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:29.949 03:30:07 -- nvmf/common.sh@120 -- # set +e 00:16:29.949 03:30:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:29.949 03:30:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:29.949 rmmod nvme_tcp 00:16:29.949 rmmod nvme_fabrics 00:16:29.949 rmmod nvme_keyring 00:16:29.949 03:30:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.949 03:30:07 -- nvmf/common.sh@124 -- # set -e 00:16:29.949 03:30:07 -- nvmf/common.sh@125 -- # return 0 00:16:29.949 03:30:07 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:16:29.949 03:30:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:29.949 03:30:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:29.949 03:30:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:29.949 03:30:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.949 03:30:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:29.949 03:30:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.949 03:30:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.949 03:30:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.483 03:30:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:32.483 00:16:32.483 real 0m7.215s 00:16:32.483 user 0m16.609s 00:16:32.483 sys 0m1.481s 00:16:32.483 03:30:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:32.483 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:32.483 ************************************ 00:16:32.483 END TEST nvmf_shutdown_tc3 00:16:32.483 ************************************ 00:16:32.483 03:30:09 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:16:32.483 00:16:32.483 real 0m28.299s 00:16:32.483 user 1m18.102s 00:16:32.483 sys 0m6.775s 00:16:32.483 03:30:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:32.483 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:32.483 ************************************ 00:16:32.483 END TEST nvmf_shutdown 00:16:32.483 ************************************ 00:16:32.483 03:30:09 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:16:32.483 03:30:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:32.483 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:32.483 03:30:09 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:16:32.483 03:30:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:32.483 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:32.483 03:30:09 
-- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:16:32.483 03:30:09 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:32.483 03:30:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:32.483 03:30:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:32.483 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:32.483 ************************************ 00:16:32.483 START TEST nvmf_multicontroller 00:16:32.483 ************************************ 00:16:32.483 03:30:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:32.483 * Looking for test storage... 00:16:32.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:32.483 03:30:09 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.483 03:30:09 -- nvmf/common.sh@7 -- # uname -s 00:16:32.483 03:30:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.483 03:30:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.483 03:30:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.483 03:30:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.483 03:30:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.483 03:30:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.483 03:30:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.483 03:30:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.483 03:30:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.483 03:30:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.483 03:30:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.483 03:30:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.483 03:30:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.483 03:30:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.483 03:30:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.483 03:30:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.484 03:30:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.484 03:30:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.484 03:30:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.484 03:30:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.484 03:30:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.484 03:30:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.484 03:30:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.484 03:30:09 -- paths/export.sh@5 -- # export PATH 00:16:32.484 03:30:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.484 03:30:09 -- nvmf/common.sh@47 -- # : 0 00:16:32.484 03:30:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:32.484 03:30:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:32.484 03:30:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.484 03:30:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.484 03:30:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.484 03:30:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:32.484 03:30:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:32.484 03:30:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:32.484 03:30:09 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:32.484 03:30:09 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:32.484 03:30:09 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:32.484 03:30:09 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:32.484 03:30:09 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:32.484 03:30:09 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:16:32.484 03:30:09 -- host/multicontroller.sh@23 -- # nvmftestinit 00:16:32.484 03:30:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:32.484 03:30:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.484 03:30:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:32.484 03:30:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:32.484 03:30:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:32.484 03:30:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.484 03:30:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.484 03:30:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
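[Editor's aside] The PATH juggling above is paths/export.sh boilerplate; the more interesting setup step is back at nvmf/common.sh@17-18, where the host identity used for the rest of the run is generated with nvme-cli at source time. A sketch of that derivation (the exact parameter expansion common.sh uses is not visible in this trace, so treat it as illustrative; requires nvme-cli):

NVME_HOSTNQN=$(nvme gen-hostnqn)  # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
NVME_HOSTID=${NVME_HOSTNQN##*:}   # everything after the last colon, i.e. the bare UUID shown at @18
echo "--hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID"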
00:16:32.484 03:30:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:32.484 03:30:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:32.484 03:30:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:32.484 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:34.387 03:30:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:34.387 03:30:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:34.387 03:30:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:34.387 03:30:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:34.387 03:30:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:34.387 03:30:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:34.387 03:30:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:34.387 03:30:11 -- nvmf/common.sh@295 -- # net_devs=() 00:16:34.387 03:30:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:34.387 03:30:11 -- nvmf/common.sh@296 -- # e810=() 00:16:34.387 03:30:11 -- nvmf/common.sh@296 -- # local -ga e810 00:16:34.387 03:30:11 -- nvmf/common.sh@297 -- # x722=() 00:16:34.387 03:30:11 -- nvmf/common.sh@297 -- # local -ga x722 00:16:34.387 03:30:11 -- nvmf/common.sh@298 -- # mlx=() 00:16:34.387 03:30:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:34.387 03:30:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.387 03:30:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.387 03:30:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.387 03:30:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.387 03:30:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.387 03:30:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.387 03:30:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.387 03:30:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.387 03:30:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.387 03:30:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.387 03:30:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.387 03:30:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:34.387 03:30:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:34.387 03:30:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:34.387 03:30:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.387 03:30:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:34.387 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:34.387 03:30:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.387 03:30:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:34.387 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:34.387 03:30:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:16:34.387 03:30:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:34.387 03:30:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.387 03:30:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.387 03:30:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:34.387 03:30:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.387 03:30:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:34.387 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:34.387 03:30:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.387 03:30:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.387 03:30:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.387 03:30:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:34.387 03:30:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.387 03:30:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:34.387 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:34.387 03:30:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.387 03:30:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:34.387 03:30:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:34.387 03:30:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:34.387 03:30:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:34.387 03:30:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.388 03:30:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.388 03:30:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.388 03:30:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:34.388 03:30:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.388 03:30:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.388 03:30:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:34.388 03:30:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.388 03:30:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.388 03:30:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:34.388 03:30:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:34.388 03:30:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.388 03:30:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.388 03:30:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.388 03:30:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.388 03:30:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:34.388 03:30:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.388 03:30:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.388 03:30:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:16:34.388 03:30:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:34.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:16:34.388 00:16:34.388 --- 10.0.0.2 ping statistics --- 00:16:34.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.388 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:16:34.388 03:30:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:16:34.388 00:16:34.388 --- 10.0.0.1 ping statistics --- 00:16:34.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.388 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:34.388 03:30:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.388 03:30:11 -- nvmf/common.sh@411 -- # return 0 00:16:34.388 03:30:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:34.388 03:30:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.388 03:30:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:34.388 03:30:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:34.388 03:30:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.388 03:30:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:34.388 03:30:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:34.388 03:30:11 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:34.388 03:30:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:34.388 03:30:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:34.388 03:30:11 -- common/autotest_common.sh@10 -- # set +x 00:16:34.388 03:30:11 -- nvmf/common.sh@470 -- # nvmfpid=277493 00:16:34.388 03:30:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:34.388 03:30:11 -- nvmf/common.sh@471 -- # waitforlisten 277493 00:16:34.388 03:30:11 -- common/autotest_common.sh@817 -- # '[' -z 277493 ']' 00:16:34.388 03:30:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.388 03:30:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:34.388 03:30:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.388 03:30:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:34.388 03:30:11 -- common/autotest_common.sh@10 -- # set +x 00:16:34.388 [2024-04-19 03:30:11.822509] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:16:34.388 [2024-04-19 03:30:11.822603] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.388 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.388 [2024-04-19 03:30:11.891616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:34.647 [2024-04-19 03:30:12.011297] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:34.647 [2024-04-19 03:30:12.011370] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.647 [2024-04-19 03:30:12.011398] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.647 [2024-04-19 03:30:12.011445] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.647 [2024-04-19 03:30:12.011456] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.647 [2024-04-19 03:30:12.011512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.647 [2024-04-19 03:30:12.011573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.647 [2024-04-19 03:30:12.011576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.647 03:30:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:34.647 03:30:12 -- common/autotest_common.sh@850 -- # return 0 00:16:34.647 03:30:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:34.647 03:30:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:34.647 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:34.647 03:30:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.647 03:30:12 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:34.647 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.647 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:34.647 [2024-04-19 03:30:12.158438] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.647 03:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.647 03:30:12 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:34.647 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.647 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:34.906 Malloc0 00:16:34.906 03:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.906 03:30:12 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:34.906 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.906 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:34.906 03:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.906 03:30:12 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:34.906 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.906 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:34.906 03:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.906 03:30:12 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.906 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.906 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:34.906 [2024-04-19 03:30:12.226819] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.906 03:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.906 03:30:12 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:34.906 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.906 03:30:12 
-- common/autotest_common.sh@10 -- # set +x 00:16:34.906 [2024-04-19 03:30:12.234708] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:34.906 03:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.906 03:30:12 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:34.906 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.906 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:34.906 Malloc1 00:16:34.906 03:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.906 03:30:12 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:34.906 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.906 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:34.906 03:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.906 03:30:12 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:34.906 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.906 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:34.906 03:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.906 03:30:12 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:34.906 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.906 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:34.906 03:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.906 03:30:12 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:34.906 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.906 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:34.906 03:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.906 03:30:12 -- host/multicontroller.sh@44 -- # bdevperf_pid=277525 00:16:34.906 03:30:12 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:34.906 03:30:12 -- host/multicontroller.sh@47 -- # waitforlisten 277525 /var/tmp/bdevperf.sock 00:16:34.906 03:30:12 -- common/autotest_common.sh@817 -- # '[' -z 277525 ']' 00:16:34.906 03:30:12 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:34.906 03:30:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.906 03:30:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:34.906 03:30:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:34.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
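[Editor's aside] bdevperf is started here with -z (idle until told to run) and its own RPC socket at /var/tmp/bdevperf.sock, so every controller in this test is created over JSON-RPC rather than from a config file. For reference, a manual equivalent of the attach that follows at multicontroller.sh@50, reusing the exact arguments the test passes (a sketch; needs a live bdevperf socket):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# -i and -c pin the host-side source address and service id; the negative
# tests below reuse them to provoke controller-name collisions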
00:16:34.906 03:30:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:34.906 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.164 03:30:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:35.164 03:30:12 -- common/autotest_common.sh@850 -- # return 0 00:16:35.164 03:30:12 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:35.164 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.164 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.422 NVMe0n1 00:16:35.422 03:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.422 03:30:12 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:35.422 03:30:12 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:35.422 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.422 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.422 03:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.422 1 00:16:35.422 03:30:12 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:35.422 03:30:12 -- common/autotest_common.sh@638 -- # local es=0 00:16:35.422 03:30:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:35.422 03:30:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:35.422 03:30:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:35.422 03:30:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:35.422 03:30:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:35.422 03:30:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:35.422 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.422 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.422 request: 00:16:35.422 { 00:16:35.422 "name": "NVMe0", 00:16:35.422 "trtype": "tcp", 00:16:35.422 "traddr": "10.0.0.2", 00:16:35.422 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:35.422 "hostaddr": "10.0.0.2", 00:16:35.422 "hostsvcid": "60000", 00:16:35.422 "adrfam": "ipv4", 00:16:35.422 "trsvcid": "4420", 00:16:35.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.422 "method": "bdev_nvme_attach_controller", 00:16:35.422 "req_id": 1 00:16:35.422 } 00:16:35.422 Got JSON-RPC error response 00:16:35.422 response: 00:16:35.422 { 00:16:35.422 "code": -114, 00:16:35.422 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:16:35.422 } 00:16:35.422 03:30:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:35.422 03:30:12 -- common/autotest_common.sh@641 -- # es=1 00:16:35.422 03:30:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:35.422 03:30:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:35.422 03:30:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:35.422 03:30:12 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:35.422 03:30:12 -- common/autotest_common.sh@638 -- # local es=0 00:16:35.422 03:30:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:35.422 03:30:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:35.422 03:30:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:35.422 03:30:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:35.422 03:30:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:35.422 03:30:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:35.422 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.422 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.422 request: 00:16:35.422 { 00:16:35.423 "name": "NVMe0", 00:16:35.423 "trtype": "tcp", 00:16:35.423 "traddr": "10.0.0.2", 00:16:35.423 "hostaddr": "10.0.0.2", 00:16:35.423 "hostsvcid": "60000", 00:16:35.423 "adrfam": "ipv4", 00:16:35.423 "trsvcid": "4420", 00:16:35.423 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:35.423 "method": "bdev_nvme_attach_controller", 00:16:35.423 "req_id": 1 00:16:35.423 } 00:16:35.423 Got JSON-RPC error response 00:16:35.423 response: 00:16:35.423 { 00:16:35.423 "code": -114, 00:16:35.423 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:16:35.423 } 00:16:35.423 03:30:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:35.423 03:30:12 -- common/autotest_common.sh@641 -- # es=1 00:16:35.423 03:30:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:35.423 03:30:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:35.423 03:30:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:35.423 03:30:12 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:35.423 03:30:12 -- common/autotest_common.sh@638 -- # local es=0 00:16:35.423 03:30:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:35.423 03:30:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:35.423 03:30:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:35.423 03:30:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:35.423 03:30:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:35.423 03:30:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:35.423 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.423 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.423 request: 00:16:35.423 { 00:16:35.423 "name": "NVMe0", 00:16:35.423 "trtype": "tcp", 00:16:35.423 "traddr": "10.0.0.2", 00:16:35.423 "hostaddr": 
"10.0.0.2", 00:16:35.423 "hostsvcid": "60000", 00:16:35.423 "adrfam": "ipv4", 00:16:35.423 "trsvcid": "4420", 00:16:35.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.423 "multipath": "disable", 00:16:35.423 "method": "bdev_nvme_attach_controller", 00:16:35.423 "req_id": 1 00:16:35.423 } 00:16:35.423 Got JSON-RPC error response 00:16:35.423 response: 00:16:35.423 { 00:16:35.423 "code": -114, 00:16:35.423 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:16:35.423 } 00:16:35.423 03:30:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:35.423 03:30:12 -- common/autotest_common.sh@641 -- # es=1 00:16:35.423 03:30:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:35.423 03:30:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:35.423 03:30:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:35.423 03:30:12 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:35.423 03:30:12 -- common/autotest_common.sh@638 -- # local es=0 00:16:35.423 03:30:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:35.423 03:30:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:35.423 03:30:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:35.423 03:30:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:35.423 03:30:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:35.423 03:30:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:35.423 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.423 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.423 request: 00:16:35.423 { 00:16:35.423 "name": "NVMe0", 00:16:35.423 "trtype": "tcp", 00:16:35.423 "traddr": "10.0.0.2", 00:16:35.423 "hostaddr": "10.0.0.2", 00:16:35.423 "hostsvcid": "60000", 00:16:35.423 "adrfam": "ipv4", 00:16:35.423 "trsvcid": "4420", 00:16:35.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.423 "multipath": "failover", 00:16:35.423 "method": "bdev_nvme_attach_controller", 00:16:35.423 "req_id": 1 00:16:35.423 } 00:16:35.423 Got JSON-RPC error response 00:16:35.423 response: 00:16:35.423 { 00:16:35.423 "code": -114, 00:16:35.423 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:16:35.423 } 00:16:35.423 03:30:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:35.423 03:30:12 -- common/autotest_common.sh@641 -- # es=1 00:16:35.423 03:30:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:35.423 03:30:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:35.423 03:30:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:35.423 03:30:12 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:35.423 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.423 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.681 00:16:35.681 03:30:12 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:16:35.681 03:30:12 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:35.681 03:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.681 03:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.681 03:30:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.681 03:30:13 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:35.681 03:30:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.681 03:30:13 -- common/autotest_common.sh@10 -- # set +x 00:16:35.681 00:16:35.681 03:30:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.681 03:30:13 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:35.681 03:30:13 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:16:35.681 03:30:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.681 03:30:13 -- common/autotest_common.sh@10 -- # set +x 00:16:35.681 03:30:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.681 03:30:13 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:35.681 03:30:13 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:37.054 0 00:16:37.054 03:30:14 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:37.055 03:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.055 03:30:14 -- common/autotest_common.sh@10 -- # set +x 00:16:37.055 03:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.055 03:30:14 -- host/multicontroller.sh@100 -- # killprocess 277525 00:16:37.055 03:30:14 -- common/autotest_common.sh@936 -- # '[' -z 277525 ']' 00:16:37.055 03:30:14 -- common/autotest_common.sh@940 -- # kill -0 277525 00:16:37.055 03:30:14 -- common/autotest_common.sh@941 -- # uname 00:16:37.055 03:30:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.055 03:30:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 277525 00:16:37.055 03:30:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:37.055 03:30:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:37.055 03:30:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 277525' 00:16:37.055 killing process with pid 277525 00:16:37.055 03:30:14 -- common/autotest_common.sh@955 -- # kill 277525 00:16:37.055 03:30:14 -- common/autotest_common.sh@960 -- # wait 277525 00:16:37.055 03:30:14 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.055 03:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.055 03:30:14 -- common/autotest_common.sh@10 -- # set +x 00:16:37.055 03:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.055 03:30:14 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:37.055 03:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.055 03:30:14 -- common/autotest_common.sh@10 -- # set +x 00:16:37.055 03:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.055 03:30:14 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:16:37.055 
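[Editor's aside] Every rejected attach earlier in this test comes back as JSON-RPC error code -114, which corresponds to -EALREADY ("Operation already in progress") in errno terms: reusing the name NVMe0 against a different subsystem NQN, with -x disable, or with -x failover is refused while the original NVMe0 path exists, whereas the plain re-attach at @79 to the second listener (port 4421) and the NVMe1 attach at @87 both return 0. A condensed replay of one reject/accept pair, arguments copied from the trace (rpc.py path abbreviated):

# rejected with -114 while NVMe0 already points at cnode1:
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
# accepted: a new controller name against the second listener
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000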
03:30:14 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:16:37.055 03:30:14 -- common/autotest_common.sh@1598 -- # read -r file 00:16:37.055 03:30:14 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:16:37.055 03:30:14 -- common/autotest_common.sh@1597 -- # sort -u 00:16:37.055 03:30:14 -- common/autotest_common.sh@1599 -- # cat 00:16:37.055 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:16:37.055 [2024-04-19 03:30:12.338612] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:16:37.055 [2024-04-19 03:30:12.338702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277525 ] 00:16:37.055 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.055 [2024-04-19 03:30:12.397040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.055 [2024-04-19 03:30:12.506133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.055 [2024-04-19 03:30:13.105842] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 18939fc8-5f8b-4cce-a21b-89488bfa91ec already exists 00:16:37.055 [2024-04-19 03:30:13.105879] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:18939fc8-5f8b-4cce-a21b-89488bfa91ec alias for bdev NVMe1n1 00:16:37.055 [2024-04-19 03:30:13.105913] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:37.055 Running I/O for 1 seconds... 00:16:37.055 00:16:37.055 Latency(us) 00:16:37.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.055 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:37.055 NVMe0n1 : 1.01 18346.35 71.67 0.00 0.00 6965.80 2135.99 12136.30 00:16:37.055 =================================================================================================================== 00:16:37.055 Total : 18346.35 71.67 0.00 0.00 6965.80 2135.99 12136.30 00:16:37.055 Received shutdown signal, test time was about 1.000000 seconds 00:16:37.055 00:16:37.055 Latency(us) 00:16:37.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.055 =================================================================================================================== 00:16:37.055 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:37.055 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:16:37.055 03:30:14 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:16:37.055 03:30:14 -- common/autotest_common.sh@1598 -- # read -r file 00:16:37.055 03:30:14 -- host/multicontroller.sh@108 -- # nvmftestfini 00:16:37.055 03:30:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:37.055 03:30:14 -- nvmf/common.sh@117 -- # sync 00:16:37.055 03:30:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:37.055 03:30:14 -- nvmf/common.sh@120 -- # set +e 00:16:37.055 03:30:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:37.055 03:30:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:37.055 rmmod nvme_tcp 00:16:37.055 rmmod nvme_fabrics 00:16:37.055 rmmod nvme_keyring 00:16:37.055 03:30:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:37.055 03:30:14 -- nvmf/common.sh@124 -- # set -e 00:16:37.055 
03:30:14 -- nvmf/common.sh@125 -- # return 0 00:16:37.055 03:30:14 -- nvmf/common.sh@478 -- # '[' -n 277493 ']' 00:16:37.055 03:30:14 -- nvmf/common.sh@479 -- # killprocess 277493 00:16:37.055 03:30:14 -- common/autotest_common.sh@936 -- # '[' -z 277493 ']' 00:16:37.055 03:30:14 -- common/autotest_common.sh@940 -- # kill -0 277493 00:16:37.055 03:30:14 -- common/autotest_common.sh@941 -- # uname 00:16:37.055 03:30:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.055 03:30:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 277493 00:16:37.313 03:30:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:37.313 03:30:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:37.313 03:30:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 277493' 00:16:37.313 killing process with pid 277493 00:16:37.313 03:30:14 -- common/autotest_common.sh@955 -- # kill 277493 00:16:37.313 03:30:14 -- common/autotest_common.sh@960 -- # wait 277493 00:16:37.573 03:30:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:37.573 03:30:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:37.573 03:30:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:37.573 03:30:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.573 03:30:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:37.573 03:30:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.573 03:30:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.573 03:30:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.474 03:30:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:39.474 00:16:39.474 real 0m7.318s 00:16:39.474 user 0m11.155s 00:16:39.474 sys 0m2.369s 00:16:39.474 03:30:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:39.474 03:30:16 -- common/autotest_common.sh@10 -- # set +x 00:16:39.474 ************************************ 00:16:39.474 END TEST nvmf_multicontroller 00:16:39.474 ************************************ 00:16:39.474 03:30:17 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:39.474 03:30:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:39.474 03:30:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:39.474 03:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:39.732 ************************************ 00:16:39.732 START TEST nvmf_aer 00:16:39.732 ************************************ 00:16:39.732 03:30:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:39.732 * Looking for test storage... 
00:16:39.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:39.732 03:30:17 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.733 03:30:17 -- nvmf/common.sh@7 -- # uname -s 00:16:39.733 03:30:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.733 03:30:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.733 03:30:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.733 03:30:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.733 03:30:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.733 03:30:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.733 03:30:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.733 03:30:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.733 03:30:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.733 03:30:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.733 03:30:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.733 03:30:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.733 03:30:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.733 03:30:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.733 03:30:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.733 03:30:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.733 03:30:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.733 03:30:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.733 03:30:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.733 03:30:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.733 03:30:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.733 03:30:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.733 03:30:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.733 03:30:17 -- paths/export.sh@5 -- # export PATH 00:16:39.733 03:30:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.733 03:30:17 -- nvmf/common.sh@47 -- # : 0 00:16:39.733 03:30:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.733 03:30:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.733 03:30:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.733 03:30:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.733 03:30:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.733 03:30:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.733 03:30:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.733 03:30:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.733 03:30:17 -- host/aer.sh@11 -- # nvmftestinit 00:16:39.733 03:30:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:39.733 03:30:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.733 03:30:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:39.733 03:30:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:39.733 03:30:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:39.733 03:30:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.733 03:30:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.733 03:30:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.733 03:30:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:39.733 03:30:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:39.733 03:30:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:39.733 03:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:41.635 03:30:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:41.635 03:30:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:41.635 03:30:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:41.635 03:30:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:41.635 03:30:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:41.635 03:30:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:41.635 03:30:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:41.635 03:30:19 -- nvmf/common.sh@295 -- # net_devs=() 00:16:41.635 03:30:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:41.635 03:30:19 -- nvmf/common.sh@296 -- # e810=() 00:16:41.635 03:30:19 -- nvmf/common.sh@296 -- # local -ga e810 00:16:41.635 03:30:19 -- nvmf/common.sh@297 -- # x722=() 00:16:41.635 
03:30:19 -- nvmf/common.sh@297 -- # local -ga x722 00:16:41.635 03:30:19 -- nvmf/common.sh@298 -- # mlx=() 00:16:41.635 03:30:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:41.635 03:30:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.635 03:30:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.635 03:30:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.635 03:30:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.635 03:30:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.635 03:30:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.635 03:30:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.635 03:30:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.635 03:30:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.635 03:30:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.635 03:30:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.635 03:30:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:41.635 03:30:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:41.635 03:30:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:41.635 03:30:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.635 03:30:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:41.635 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:41.635 03:30:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.635 03:30:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:41.635 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:41.635 03:30:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:41.635 03:30:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.635 03:30:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.635 03:30:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:41.635 03:30:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.635 03:30:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:41.635 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:41.635 03:30:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.635 03:30:19 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.635 03:30:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.635 03:30:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:41.635 03:30:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.635 03:30:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:41.635 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:41.635 03:30:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.635 03:30:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:41.635 03:30:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:41.635 03:30:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:41.635 03:30:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:41.635 03:30:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.635 03:30:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.635 03:30:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.635 03:30:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:41.635 03:30:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.635 03:30:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.635 03:30:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:41.635 03:30:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.635 03:30:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.635 03:30:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:41.635 03:30:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:41.635 03:30:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:41.635 03:30:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.635 03:30:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.635 03:30:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.635 03:30:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:41.635 03:30:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:41.894 03:30:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.894 03:30:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.894 03:30:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:41.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:16:41.894 00:16:41.894 --- 10.0.0.2 ping statistics --- 00:16:41.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.894 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:16:41.894 03:30:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:41.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:16:41.894 00:16:41.894 --- 10.0.0.1 ping statistics --- 00:16:41.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.894 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:16:41.894 03:30:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.894 03:30:19 -- nvmf/common.sh@411 -- # return 0 00:16:41.894 03:30:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:41.894 03:30:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.894 03:30:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:41.894 03:30:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:41.894 03:30:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.894 03:30:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:41.894 03:30:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:41.894 03:30:19 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:41.894 03:30:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:41.894 03:30:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:41.894 03:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:41.894 03:30:19 -- nvmf/common.sh@470 -- # nvmfpid=279745 00:16:41.894 03:30:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:41.894 03:30:19 -- nvmf/common.sh@471 -- # waitforlisten 279745 00:16:41.894 03:30:19 -- common/autotest_common.sh@817 -- # '[' -z 279745 ']' 00:16:41.894 03:30:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.894 03:30:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:41.894 03:30:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.894 03:30:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:41.894 03:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:41.894 [2024-04-19 03:30:19.313705] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:16:41.894 [2024-04-19 03:30:19.313788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.894 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.894 [2024-04-19 03:30:19.378577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:42.152 [2024-04-19 03:30:19.485860] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.152 [2024-04-19 03:30:19.485910] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.152 [2024-04-19 03:30:19.485938] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.152 [2024-04-19 03:30:19.485951] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.152 [2024-04-19 03:30:19.485961] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
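At this point nvmf_tcp_init has split the two ice ports between network namespaces and verified reachability in both directions, and the target was started inside the namespace so its listener binds the 10.0.0.2 side; its reactor start-up notices follow below. The wiring, condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP on the initiator side
    ping -c 1 10.0.0.2                                            # target address reachable before the test starts
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target runs namespaced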
00:16:42.152 [2024-04-19 03:30:19.486051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.152 [2024-04-19 03:30:19.486111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.152 [2024-04-19 03:30:19.486187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.152 [2024-04-19 03:30:19.486190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.152 03:30:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:42.152 03:30:19 -- common/autotest_common.sh@850 -- # return 0 00:16:42.152 03:30:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:42.152 03:30:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:42.152 03:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:42.152 03:30:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.152 03:30:19 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:42.152 03:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.153 03:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:42.153 [2024-04-19 03:30:19.639152] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.153 03:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.153 03:30:19 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:42.153 03:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.153 03:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:42.153 Malloc0 00:16:42.153 03:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.153 03:30:19 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:42.153 03:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.153 03:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:42.153 03:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.153 03:30:19 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:42.153 03:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.153 03:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:42.153 03:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.153 03:30:19 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.153 03:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.153 03:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:42.153 [2024-04-19 03:30:19.692037] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.153 03:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.153 03:30:19 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:42.153 03:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.153 03:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:42.153 [2024-04-19 03:30:19.699768] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:42.153 [ 00:16:42.153 { 00:16:42.153 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:42.153 "subtype": "Discovery", 00:16:42.153 "listen_addresses": [], 00:16:42.153 "allow_any_host": true, 00:16:42.153 "hosts": [] 00:16:42.153 }, 00:16:42.153 { 00:16:42.153 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:16:42.153 "subtype": "NVMe", 00:16:42.153 "listen_addresses": [ 00:16:42.153 { 00:16:42.153 "transport": "TCP", 00:16:42.153 "trtype": "TCP", 00:16:42.153 "adrfam": "IPv4", 00:16:42.153 "traddr": "10.0.0.2", 00:16:42.153 "trsvcid": "4420" 00:16:42.153 } 00:16:42.153 ], 00:16:42.153 "allow_any_host": true, 00:16:42.153 "hosts": [], 00:16:42.153 "serial_number": "SPDK00000000000001", 00:16:42.153 "model_number": "SPDK bdev Controller", 00:16:42.153 "max_namespaces": 2, 00:16:42.153 "min_cntlid": 1, 00:16:42.153 "max_cntlid": 65519, 00:16:42.153 "namespaces": [ 00:16:42.153 { 00:16:42.153 "nsid": 1, 00:16:42.153 "bdev_name": "Malloc0", 00:16:42.153 "name": "Malloc0", 00:16:42.153 "nguid": "9714CC59AB2D4BFC97FD188313E6956C", 00:16:42.153 "uuid": "9714cc59-ab2d-4bfc-97fd-188313e6956c" 00:16:42.153 } 00:16:42.153 ] 00:16:42.153 } 00:16:42.153 ] 00:16:42.153 03:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.153 03:30:19 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:42.153 03:30:19 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:42.153 03:30:19 -- host/aer.sh@33 -- # aerpid=279883 00:16:42.153 03:30:19 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:42.153 03:30:19 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:42.153 03:30:19 -- common/autotest_common.sh@1251 -- # local i=0 00:16:42.411 03:30:19 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:42.411 03:30:19 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:16:42.411 03:30:19 -- common/autotest_common.sh@1254 -- # i=1 00:16:42.411 03:30:19 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:16:42.411 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.411 03:30:19 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:42.411 03:30:19 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:16:42.411 03:30:19 -- common/autotest_common.sh@1254 -- # i=2 00:16:42.411 03:30:19 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:16:42.411 03:30:19 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:42.411 03:30:19 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:42.411 03:30:19 -- common/autotest_common.sh@1262 -- # return 0 00:16:42.411 03:30:19 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:42.411 03:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.411 03:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:42.411 Malloc1 00:16:42.411 03:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.411 03:30:19 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:42.411 03:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.411 03:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:42.411 03:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.411 03:30:19 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:42.411 03:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.411 03:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:42.669 Asynchronous Event Request test 00:16:42.669 Attaching to 10.0.0.2 00:16:42.669 Attached to 10.0.0.2 00:16:42.669 Registering asynchronous event callbacks... 
00:16:42.669 Starting namespace attribute notice tests for all controllers... 00:16:42.669 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:42.669 aer_cb - Changed Namespace 00:16:42.669 Cleaning up... 00:16:42.669 [ 00:16:42.669 { 00:16:42.669 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:42.669 "subtype": "Discovery", 00:16:42.669 "listen_addresses": [], 00:16:42.669 "allow_any_host": true, 00:16:42.669 "hosts": [] 00:16:42.669 }, 00:16:42.669 { 00:16:42.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.669 "subtype": "NVMe", 00:16:42.669 "listen_addresses": [ 00:16:42.669 { 00:16:42.669 "transport": "TCP", 00:16:42.669 "trtype": "TCP", 00:16:42.669 "adrfam": "IPv4", 00:16:42.669 "traddr": "10.0.0.2", 00:16:42.669 "trsvcid": "4420" 00:16:42.669 } 00:16:42.669 ], 00:16:42.669 "allow_any_host": true, 00:16:42.669 "hosts": [], 00:16:42.669 "serial_number": "SPDK00000000000001", 00:16:42.669 "model_number": "SPDK bdev Controller", 00:16:42.669 "max_namespaces": 2, 00:16:42.669 "min_cntlid": 1, 00:16:42.669 "max_cntlid": 65519, 00:16:42.669 "namespaces": [ 00:16:42.669 { 00:16:42.669 "nsid": 1, 00:16:42.669 "bdev_name": "Malloc0", 00:16:42.669 "name": "Malloc0", 00:16:42.669 "nguid": "9714CC59AB2D4BFC97FD188313E6956C", 00:16:42.669 "uuid": "9714cc59-ab2d-4bfc-97fd-188313e6956c" 00:16:42.669 }, 00:16:42.669 { 00:16:42.669 "nsid": 2, 00:16:42.669 "bdev_name": "Malloc1", 00:16:42.669 "name": "Malloc1", 00:16:42.669 "nguid": "6C19A5F633C442458AA9C35DEE7C3938", 00:16:42.670 "uuid": "6c19a5f6-33c4-4245-8aa9-c35dee7c3938" 00:16:42.670 } 00:16:42.670 ] 00:16:42.670 } 00:16:42.670 ] 00:16:42.670 03:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.670 03:30:19 -- host/aer.sh@43 -- # wait 279883 00:16:42.670 03:30:19 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:42.670 03:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.670 03:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:42.670 03:30:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.670 03:30:20 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:42.670 03:30:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.670 03:30:20 -- common/autotest_common.sh@10 -- # set +x 00:16:42.670 03:30:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.670 03:30:20 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.670 03:30:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.670 03:30:20 -- common/autotest_common.sh@10 -- # set +x 00:16:42.670 03:30:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.670 03:30:20 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:42.670 03:30:20 -- host/aer.sh@51 -- # nvmftestfini 00:16:42.670 03:30:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:42.670 03:30:20 -- nvmf/common.sh@117 -- # sync 00:16:42.670 03:30:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.670 03:30:20 -- nvmf/common.sh@120 -- # set +e 00:16:42.670 03:30:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.670 03:30:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.670 rmmod nvme_tcp 00:16:42.670 rmmod nvme_fabrics 00:16:42.670 rmmod nvme_keyring 00:16:42.670 03:30:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.670 03:30:20 -- nvmf/common.sh@124 -- # set -e 00:16:42.670 03:30:20 -- nvmf/common.sh@125 -- # return 0 00:16:42.670 03:30:20 -- nvmf/common.sh@478 -- # '[' -n 279745 ']' 00:16:42.670 03:30:20 -- 
nvmf/common.sh@479 -- # killprocess 279745 00:16:42.670 03:30:20 -- common/autotest_common.sh@936 -- # '[' -z 279745 ']' 00:16:42.670 03:30:20 -- common/autotest_common.sh@940 -- # kill -0 279745 00:16:42.670 03:30:20 -- common/autotest_common.sh@941 -- # uname 00:16:42.670 03:30:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.670 03:30:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 279745 00:16:42.670 03:30:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.670 03:30:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.670 03:30:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 279745' 00:16:42.670 killing process with pid 279745 00:16:42.670 03:30:20 -- common/autotest_common.sh@955 -- # kill 279745 00:16:42.670 [2024-04-19 03:30:20.129569] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:42.670 03:30:20 -- common/autotest_common.sh@960 -- # wait 279745 00:16:42.929 03:30:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:42.929 03:30:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:42.929 03:30:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:42.929 03:30:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.929 03:30:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:42.929 03:30:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.929 03:30:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.929 03:30:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.463 03:30:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:45.463 00:16:45.463 real 0m5.352s 00:16:45.463 user 0m4.160s 00:16:45.463 sys 0m1.880s 00:16:45.463 03:30:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:45.463 03:30:22 -- common/autotest_common.sh@10 -- # set +x 00:16:45.463 ************************************ 00:16:45.463 END TEST nvmf_aer 00:16:45.463 ************************************ 00:16:45.463 03:30:22 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:45.463 03:30:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:45.463 03:30:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:45.463 03:30:22 -- common/autotest_common.sh@10 -- # set +x 00:16:45.463 ************************************ 00:16:45.463 START TEST nvmf_async_init 00:16:45.463 ************************************ 00:16:45.463 03:30:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:45.463 * Looking for test storage... 
00:16:45.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:45.463 03:30:22 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.463 03:30:22 -- nvmf/common.sh@7 -- # uname -s 00:16:45.463 03:30:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.463 03:30:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.463 03:30:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.463 03:30:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.463 03:30:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.463 03:30:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.463 03:30:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.463 03:30:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.463 03:30:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.463 03:30:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.463 03:30:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.463 03:30:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.463 03:30:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.463 03:30:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.463 03:30:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.463 03:30:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.463 03:30:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.463 03:30:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.463 03:30:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.463 03:30:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.463 03:30:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.463 03:30:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.463 03:30:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.463 03:30:22 -- paths/export.sh@5 -- # export PATH 00:16:45.463 03:30:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.463 03:30:22 -- nvmf/common.sh@47 -- # : 0 00:16:45.463 03:30:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.463 03:30:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.463 03:30:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.463 03:30:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.463 03:30:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.463 03:30:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.463 03:30:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.463 03:30:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.463 03:30:22 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:45.463 03:30:22 -- host/async_init.sh@14 -- # null_block_size=512 00:16:45.463 03:30:22 -- host/async_init.sh@15 -- # null_bdev=null0 00:16:45.463 03:30:22 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:45.463 03:30:22 -- host/async_init.sh@20 -- # uuidgen 00:16:45.463 03:30:22 -- host/async_init.sh@20 -- # tr -d - 00:16:45.463 03:30:22 -- host/async_init.sh@20 -- # nguid=171c4fa459d04fc99f79707664326f5a 00:16:45.463 03:30:22 -- host/async_init.sh@22 -- # nvmftestinit 00:16:45.463 03:30:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:45.463 03:30:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.463 03:30:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:45.463 03:30:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:45.463 03:30:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:45.463 03:30:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.463 03:30:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.463 03:30:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.463 03:30:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:45.463 03:30:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:45.463 03:30:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.463 03:30:22 -- common/autotest_common.sh@10 -- # set +x 00:16:47.366 03:30:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:47.366 03:30:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:47.366 03:30:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:47.366 03:30:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:47.366 03:30:24 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:47.366 03:30:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:47.366 03:30:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:47.366 03:30:24 -- nvmf/common.sh@295 -- # net_devs=() 00:16:47.366 03:30:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:47.366 03:30:24 -- nvmf/common.sh@296 -- # e810=() 00:16:47.366 03:30:24 -- nvmf/common.sh@296 -- # local -ga e810 00:16:47.366 03:30:24 -- nvmf/common.sh@297 -- # x722=() 00:16:47.366 03:30:24 -- nvmf/common.sh@297 -- # local -ga x722 00:16:47.366 03:30:24 -- nvmf/common.sh@298 -- # mlx=() 00:16:47.366 03:30:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:47.366 03:30:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.366 03:30:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.366 03:30:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.366 03:30:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.366 03:30:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.366 03:30:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.366 03:30:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.366 03:30:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.366 03:30:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.366 03:30:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.366 03:30:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.366 03:30:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:47.366 03:30:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:47.366 03:30:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:47.366 03:30:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:47.366 03:30:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:47.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:47.366 03:30:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:47.366 03:30:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:47.366 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:47.366 03:30:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:47.366 03:30:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:47.366 03:30:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:47.366 
03:30:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.366 03:30:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:47.366 03:30:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.367 03:30:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:47.367 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:47.367 03:30:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.367 03:30:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:47.367 03:30:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.367 03:30:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:47.367 03:30:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.367 03:30:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:47.367 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:47.367 03:30:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.367 03:30:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:47.367 03:30:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:47.367 03:30:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:47.367 03:30:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:47.367 03:30:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:47.367 03:30:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.367 03:30:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.367 03:30:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:47.367 03:30:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:47.367 03:30:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:47.367 03:30:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:47.367 03:30:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:47.367 03:30:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:47.367 03:30:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.367 03:30:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:47.367 03:30:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:47.367 03:30:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:47.367 03:30:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:47.367 03:30:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:47.367 03:30:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:47.367 03:30:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:47.367 03:30:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:47.367 03:30:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:47.367 03:30:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:47.367 03:30:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:47.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:16:47.367 00:16:47.367 --- 10.0.0.2 ping statistics --- 00:16:47.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.367 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:16:47.367 03:30:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:47.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:47.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:16:47.367 00:16:47.367 --- 10.0.0.1 ping statistics --- 00:16:47.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.367 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:47.367 03:30:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.367 03:30:24 -- nvmf/common.sh@411 -- # return 0 00:16:47.367 03:30:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:47.367 03:30:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.367 03:30:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:47.367 03:30:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:47.367 03:30:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.367 03:30:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:47.367 03:30:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:47.367 03:30:24 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:47.367 03:30:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:47.367 03:30:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:47.367 03:30:24 -- common/autotest_common.sh@10 -- # set +x 00:16:47.367 03:30:24 -- nvmf/common.sh@470 -- # nvmfpid=281830 00:16:47.367 03:30:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:47.367 03:30:24 -- nvmf/common.sh@471 -- # waitforlisten 281830 00:16:47.367 03:30:24 -- common/autotest_common.sh@817 -- # '[' -z 281830 ']' 00:16:47.367 03:30:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.367 03:30:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:47.367 03:30:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.367 03:30:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:47.367 03:30:24 -- common/autotest_common.sh@10 -- # set +x 00:16:47.367 [2024-04-19 03:30:24.747425] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:16:47.367 [2024-04-19 03:30:24.747505] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.367 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.367 [2024-04-19 03:30:24.812944] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.367 [2024-04-19 03:30:24.923504] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.367 [2024-04-19 03:30:24.923578] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.367 [2024-04-19 03:30:24.923592] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.367 [2024-04-19 03:30:24.923618] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.367 [2024-04-19 03:30:24.923629] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
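The async_init host test reuses the same namespace fixture but pins the target to a single core, which is why DPDK reports one available core here and only the core-0 reactor line follows below (the aer run above used -m 0xF and brought up four reactors):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1   # one reactor, core 0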
00:16:47.367 [2024-04-19 03:30:24.923659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.625 03:30:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:47.625 03:30:25 -- common/autotest_common.sh@850 -- # return 0 00:16:47.625 03:30:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:47.625 03:30:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:47.625 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:47.625 03:30:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.625 03:30:25 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:16:47.625 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.625 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:47.625 [2024-04-19 03:30:25.078098] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.625 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.625 03:30:25 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:47.626 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.626 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:47.626 null0 00:16:47.626 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.626 03:30:25 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:47.626 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.626 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:47.626 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.626 03:30:25 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:47.626 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.626 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:47.626 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.626 03:30:25 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 171c4fa459d04fc99f79707664326f5a 00:16:47.626 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.626 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:47.626 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.626 03:30:25 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:47.626 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.626 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:47.626 [2024-04-19 03:30:25.118393] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.626 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.626 03:30:25 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:47.626 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.626 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:47.884 nvme0n1 00:16:47.884 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.884 03:30:25 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:47.884 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.884 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:47.884 [ 00:16:47.884 { 00:16:47.884 "name": "nvme0n1", 00:16:47.884 "aliases": [ 00:16:47.884 
"171c4fa4-59d0-4fc9-9f79-707664326f5a" 00:16:47.884 ], 00:16:47.884 "product_name": "NVMe disk", 00:16:47.884 "block_size": 512, 00:16:47.884 "num_blocks": 2097152, 00:16:47.884 "uuid": "171c4fa4-59d0-4fc9-9f79-707664326f5a", 00:16:47.884 "assigned_rate_limits": { 00:16:47.884 "rw_ios_per_sec": 0, 00:16:47.884 "rw_mbytes_per_sec": 0, 00:16:47.884 "r_mbytes_per_sec": 0, 00:16:47.884 "w_mbytes_per_sec": 0 00:16:47.884 }, 00:16:47.884 "claimed": false, 00:16:47.884 "zoned": false, 00:16:47.884 "supported_io_types": { 00:16:47.884 "read": true, 00:16:47.884 "write": true, 00:16:47.884 "unmap": false, 00:16:47.884 "write_zeroes": true, 00:16:47.884 "flush": true, 00:16:47.884 "reset": true, 00:16:47.884 "compare": true, 00:16:47.884 "compare_and_write": true, 00:16:47.884 "abort": true, 00:16:47.884 "nvme_admin": true, 00:16:47.884 "nvme_io": true 00:16:47.884 }, 00:16:47.884 "memory_domains": [ 00:16:47.884 { 00:16:47.884 "dma_device_id": "system", 00:16:47.884 "dma_device_type": 1 00:16:47.884 } 00:16:47.884 ], 00:16:47.884 "driver_specific": { 00:16:47.884 "nvme": [ 00:16:47.884 { 00:16:47.884 "trid": { 00:16:47.884 "trtype": "TCP", 00:16:47.884 "adrfam": "IPv4", 00:16:47.884 "traddr": "10.0.0.2", 00:16:47.884 "trsvcid": "4420", 00:16:47.884 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:47.884 }, 00:16:47.884 "ctrlr_data": { 00:16:47.884 "cntlid": 1, 00:16:47.884 "vendor_id": "0x8086", 00:16:47.884 "model_number": "SPDK bdev Controller", 00:16:47.884 "serial_number": "00000000000000000000", 00:16:47.884 "firmware_revision": "24.05", 00:16:47.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:47.884 "oacs": { 00:16:47.884 "security": 0, 00:16:47.884 "format": 0, 00:16:47.884 "firmware": 0, 00:16:47.884 "ns_manage": 0 00:16:47.884 }, 00:16:47.884 "multi_ctrlr": true, 00:16:47.884 "ana_reporting": false 00:16:47.884 }, 00:16:47.884 "vs": { 00:16:47.884 "nvme_version": "1.3" 00:16:47.884 }, 00:16:47.884 "ns_data": { 00:16:47.884 "id": 1, 00:16:47.884 "can_share": true 00:16:47.884 } 00:16:47.884 } 00:16:47.884 ], 00:16:47.884 "mp_policy": "active_passive" 00:16:47.884 } 00:16:47.884 } 00:16:47.884 ] 00:16:47.884 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.884 03:30:25 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:47.884 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.884 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:47.884 [2024-04-19 03:30:25.366889] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:47.884 [2024-04-19 03:30:25.366981] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28f90 (9): Bad file descriptor 00:16:48.145 [2024-04-19 03:30:25.499538] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:48.145 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.145 03:30:25 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:48.145 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.145 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:48.145 [ 00:16:48.145 { 00:16:48.145 "name": "nvme0n1", 00:16:48.145 "aliases": [ 00:16:48.145 "171c4fa4-59d0-4fc9-9f79-707664326f5a" 00:16:48.145 ], 00:16:48.145 "product_name": "NVMe disk", 00:16:48.145 "block_size": 512, 00:16:48.145 "num_blocks": 2097152, 00:16:48.145 "uuid": "171c4fa4-59d0-4fc9-9f79-707664326f5a", 00:16:48.145 "assigned_rate_limits": { 00:16:48.145 "rw_ios_per_sec": 0, 00:16:48.145 "rw_mbytes_per_sec": 0, 00:16:48.145 "r_mbytes_per_sec": 0, 00:16:48.145 "w_mbytes_per_sec": 0 00:16:48.145 }, 00:16:48.145 "claimed": false, 00:16:48.145 "zoned": false, 00:16:48.145 "supported_io_types": { 00:16:48.145 "read": true, 00:16:48.145 "write": true, 00:16:48.145 "unmap": false, 00:16:48.145 "write_zeroes": true, 00:16:48.145 "flush": true, 00:16:48.145 "reset": true, 00:16:48.145 "compare": true, 00:16:48.145 "compare_and_write": true, 00:16:48.145 "abort": true, 00:16:48.145 "nvme_admin": true, 00:16:48.145 "nvme_io": true 00:16:48.145 }, 00:16:48.145 "memory_domains": [ 00:16:48.145 { 00:16:48.145 "dma_device_id": "system", 00:16:48.145 "dma_device_type": 1 00:16:48.145 } 00:16:48.145 ], 00:16:48.145 "driver_specific": { 00:16:48.145 "nvme": [ 00:16:48.145 { 00:16:48.145 "trid": { 00:16:48.145 "trtype": "TCP", 00:16:48.145 "adrfam": "IPv4", 00:16:48.145 "traddr": "10.0.0.2", 00:16:48.145 "trsvcid": "4420", 00:16:48.145 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:48.145 }, 00:16:48.145 "ctrlr_data": { 00:16:48.145 "cntlid": 2, 00:16:48.145 "vendor_id": "0x8086", 00:16:48.145 "model_number": "SPDK bdev Controller", 00:16:48.145 "serial_number": "00000000000000000000", 00:16:48.145 "firmware_revision": "24.05", 00:16:48.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:48.145 "oacs": { 00:16:48.145 "security": 0, 00:16:48.145 "format": 0, 00:16:48.145 "firmware": 0, 00:16:48.145 "ns_manage": 0 00:16:48.145 }, 00:16:48.145 "multi_ctrlr": true, 00:16:48.145 "ana_reporting": false 00:16:48.145 }, 00:16:48.145 "vs": { 00:16:48.145 "nvme_version": "1.3" 00:16:48.145 }, 00:16:48.145 "ns_data": { 00:16:48.145 "id": 1, 00:16:48.145 "can_share": true 00:16:48.145 } 00:16:48.145 } 00:16:48.145 ], 00:16:48.145 "mp_policy": "active_passive" 00:16:48.145 } 00:16:48.145 } 00:16:48.145 ] 00:16:48.145 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.145 03:30:25 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.145 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.145 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:48.145 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.145 03:30:25 -- host/async_init.sh@53 -- # mktemp 00:16:48.145 03:30:25 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.mfcCuKs4gQ 00:16:48.145 03:30:25 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:48.145 03:30:25 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.mfcCuKs4gQ 00:16:48.145 03:30:25 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:48.145 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.145 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:48.145 03:30:25 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.145 03:30:25 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:16:48.145 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.145 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:48.145 [2024-04-19 03:30:25.547510] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:48.145 [2024-04-19 03:30:25.547627] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:48.145 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.145 03:30:25 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mfcCuKs4gQ 00:16:48.145 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.145 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:48.145 [2024-04-19 03:30:25.555518] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:48.145 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.145 03:30:25 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mfcCuKs4gQ 00:16:48.145 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.145 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:48.145 [2024-04-19 03:30:25.563533] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:48.145 [2024-04-19 03:30:25.563602] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:48.145 nvme0n1 00:16:48.145 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.145 03:30:25 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:48.145 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.145 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:48.145 [ 00:16:48.145 { 00:16:48.145 "name": "nvme0n1", 00:16:48.145 "aliases": [ 00:16:48.145 "171c4fa4-59d0-4fc9-9f79-707664326f5a" 00:16:48.145 ], 00:16:48.145 "product_name": "NVMe disk", 00:16:48.145 "block_size": 512, 00:16:48.145 "num_blocks": 2097152, 00:16:48.145 "uuid": "171c4fa4-59d0-4fc9-9f79-707664326f5a", 00:16:48.145 "assigned_rate_limits": { 00:16:48.145 "rw_ios_per_sec": 0, 00:16:48.145 "rw_mbytes_per_sec": 0, 00:16:48.145 "r_mbytes_per_sec": 0, 00:16:48.145 "w_mbytes_per_sec": 0 00:16:48.145 }, 00:16:48.145 "claimed": false, 00:16:48.145 "zoned": false, 00:16:48.145 "supported_io_types": { 00:16:48.145 "read": true, 00:16:48.145 "write": true, 00:16:48.145 "unmap": false, 00:16:48.145 "write_zeroes": true, 00:16:48.145 "flush": true, 00:16:48.145 "reset": true, 00:16:48.145 "compare": true, 00:16:48.145 "compare_and_write": true, 00:16:48.145 "abort": true, 00:16:48.145 "nvme_admin": true, 00:16:48.145 "nvme_io": true 00:16:48.145 }, 00:16:48.145 "memory_domains": [ 00:16:48.145 { 00:16:48.145 "dma_device_id": "system", 00:16:48.145 "dma_device_type": 1 00:16:48.145 } 00:16:48.145 ], 00:16:48.145 "driver_specific": { 00:16:48.145 "nvme": [ 00:16:48.145 { 00:16:48.145 "trid": { 00:16:48.145 "trtype": "TCP", 00:16:48.145 "adrfam": "IPv4", 00:16:48.145 "traddr": "10.0.0.2", 
00:16:48.145 "trsvcid": "4421", 00:16:48.145 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:48.145 }, 00:16:48.145 "ctrlr_data": { 00:16:48.145 "cntlid": 3, 00:16:48.145 "vendor_id": "0x8086", 00:16:48.145 "model_number": "SPDK bdev Controller", 00:16:48.145 "serial_number": "00000000000000000000", 00:16:48.145 "firmware_revision": "24.05", 00:16:48.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:48.145 "oacs": { 00:16:48.145 "security": 0, 00:16:48.145 "format": 0, 00:16:48.145 "firmware": 0, 00:16:48.145 "ns_manage": 0 00:16:48.145 }, 00:16:48.145 "multi_ctrlr": true, 00:16:48.145 "ana_reporting": false 00:16:48.145 }, 00:16:48.145 "vs": { 00:16:48.145 "nvme_version": "1.3" 00:16:48.145 }, 00:16:48.145 "ns_data": { 00:16:48.146 "id": 1, 00:16:48.146 "can_share": true 00:16:48.146 } 00:16:48.146 } 00:16:48.146 ], 00:16:48.146 "mp_policy": "active_passive" 00:16:48.146 } 00:16:48.146 } 00:16:48.146 ] 00:16:48.146 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.146 03:30:25 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.146 03:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.146 03:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:48.146 03:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.146 03:30:25 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.mfcCuKs4gQ 00:16:48.146 03:30:25 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:48.146 03:30:25 -- host/async_init.sh@78 -- # nvmftestfini 00:16:48.146 03:30:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:48.146 03:30:25 -- nvmf/common.sh@117 -- # sync 00:16:48.146 03:30:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.146 03:30:25 -- nvmf/common.sh@120 -- # set +e 00:16:48.146 03:30:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.146 03:30:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:48.146 rmmod nvme_tcp 00:16:48.146 rmmod nvme_fabrics 00:16:48.435 rmmod nvme_keyring 00:16:48.435 03:30:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.435 03:30:25 -- nvmf/common.sh@124 -- # set -e 00:16:48.435 03:30:25 -- nvmf/common.sh@125 -- # return 0 00:16:48.435 03:30:25 -- nvmf/common.sh@478 -- # '[' -n 281830 ']' 00:16:48.435 03:30:25 -- nvmf/common.sh@479 -- # killprocess 281830 00:16:48.435 03:30:25 -- common/autotest_common.sh@936 -- # '[' -z 281830 ']' 00:16:48.435 03:30:25 -- common/autotest_common.sh@940 -- # kill -0 281830 00:16:48.435 03:30:25 -- common/autotest_common.sh@941 -- # uname 00:16:48.435 03:30:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.435 03:30:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 281830 00:16:48.435 03:30:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:48.435 03:30:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:48.435 03:30:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 281830' 00:16:48.435 killing process with pid 281830 00:16:48.435 03:30:25 -- common/autotest_common.sh@955 -- # kill 281830 00:16:48.435 [2024-04-19 03:30:25.763067] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:48.435 [2024-04-19 03:30:25.763100] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:48.435 03:30:25 -- common/autotest_common.sh@960 -- # wait 281830 00:16:48.694 03:30:26 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:48.694 03:30:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:48.694 03:30:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:48.694 03:30:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.694 03:30:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.694 03:30:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.694 03:30:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.694 03:30:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.599 03:30:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:50.599 00:16:50.599 real 0m5.459s 00:16:50.599 user 0m2.078s 00:16:50.599 sys 0m1.749s 00:16:50.599 03:30:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:50.599 03:30:28 -- common/autotest_common.sh@10 -- # set +x 00:16:50.599 ************************************ 00:16:50.599 END TEST nvmf_async_init 00:16:50.599 ************************************ 00:16:50.599 03:30:28 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:50.599 03:30:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:50.599 03:30:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.599 03:30:28 -- common/autotest_common.sh@10 -- # set +x 00:16:50.858 ************************************ 00:16:50.858 START TEST dma 00:16:50.858 ************************************ 00:16:50.858 03:30:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:50.858 * Looking for test storage... 00:16:50.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:50.858 03:30:28 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.858 03:30:28 -- nvmf/common.sh@7 -- # uname -s 00:16:50.858 03:30:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.858 03:30:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.858 03:30:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.858 03:30:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.858 03:30:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.858 03:30:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.858 03:30:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.858 03:30:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.858 03:30:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.858 03:30:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.858 03:30:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.858 03:30:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.858 03:30:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.858 03:30:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.858 03:30:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.858 03:30:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.858 03:30:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.858 03:30:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.858 03:30:28 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.858 03:30:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.858 03:30:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.858 03:30:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.858 03:30:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.858 03:30:28 -- paths/export.sh@5 -- # export PATH 00:16:50.858 03:30:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.858 03:30:28 -- nvmf/common.sh@47 -- # : 0 00:16:50.858 03:30:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.858 03:30:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.858 03:30:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.858 03:30:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.858 03:30:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.858 03:30:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.858 03:30:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.858 03:30:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.858 03:30:28 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:16:50.858 03:30:28 -- host/dma.sh@13 -- # exit 0 00:16:50.858 00:16:50.858 real 0m0.073s 00:16:50.858 user 0m0.036s 00:16:50.858 sys 0m0.042s 00:16:50.858 03:30:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:50.858 03:30:28 -- common/autotest_common.sh@10 -- # set +x 00:16:50.858 ************************************ 00:16:50.858 END TEST dma 00:16:50.858 
************************************ 00:16:50.858 03:30:28 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:50.858 03:30:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:50.858 03:30:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.858 03:30:28 -- common/autotest_common.sh@10 -- # set +x 00:16:50.858 ************************************ 00:16:50.858 START TEST nvmf_identify 00:16:50.858 ************************************ 00:16:50.858 03:30:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:50.858 * Looking for test storage... 00:16:51.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:51.117 03:30:28 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.117 03:30:28 -- nvmf/common.sh@7 -- # uname -s 00:16:51.117 03:30:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.117 03:30:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.117 03:30:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.117 03:30:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.117 03:30:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.117 03:30:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.117 03:30:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.117 03:30:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.117 03:30:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.117 03:30:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.117 03:30:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.117 03:30:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.117 03:30:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.117 03:30:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.117 03:30:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.117 03:30:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.117 03:30:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.117 03:30:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.117 03:30:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.117 03:30:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.117 03:30:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.117 03:30:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.117 03:30:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.117 03:30:28 -- paths/export.sh@5 -- # export PATH 00:16:51.117 03:30:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.117 03:30:28 -- nvmf/common.sh@47 -- # : 0 00:16:51.117 03:30:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.117 03:30:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.117 03:30:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.117 03:30:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.117 03:30:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.117 03:30:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.117 03:30:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.117 03:30:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.117 03:30:28 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.117 03:30:28 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.117 03:30:28 -- host/identify.sh@14 -- # nvmftestinit 00:16:51.117 03:30:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:51.117 03:30:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.117 03:30:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:51.117 03:30:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:51.117 03:30:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:51.117 03:30:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.117 03:30:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.117 03:30:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.117 03:30:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:51.117 03:30:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:51.117 03:30:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.117 03:30:28 -- common/autotest_common.sh@10 -- # set +x 00:16:53.017 03:30:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
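
What runs next is nvmf_common's NIC discovery: gather_supported_nvmf_pci_devs sorts the machine's ports into per-family lists (e810, x722, mlx) by PCI vendor:device id, and since this job tests the e810 family only that list survives (pci_devs=("${e810[@]}") in the trace that follows). The same match can be reproduced by hand; a sketch using the two E810 ids the scan checks for:

  # list E810 ports the way the harness classifies them
  lspci -d 8086:159b   # the id found at 0000:0a:00.0 and .1 below
  lspci -d 8086:1592   # the other E810 id the harness accepts
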
00:16:53.017 03:30:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.017 03:30:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.017 03:30:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.017 03:30:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.017 03:30:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:53.017 03:30:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.017 03:30:30 -- nvmf/common.sh@295 -- # net_devs=() 00:16:53.017 03:30:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.017 03:30:30 -- nvmf/common.sh@296 -- # e810=() 00:16:53.017 03:30:30 -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.017 03:30:30 -- nvmf/common.sh@297 -- # x722=() 00:16:53.017 03:30:30 -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.017 03:30:30 -- nvmf/common.sh@298 -- # mlx=() 00:16:53.017 03:30:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.017 03:30:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.017 03:30:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.017 03:30:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.017 03:30:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.017 03:30:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.017 03:30:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.017 03:30:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.017 03:30:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.017 03:30:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.017 03:30:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.017 03:30:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.017 03:30:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.017 03:30:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:53.017 03:30:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:53.017 03:30:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:53.017 03:30:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:53.017 03:30:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.017 03:30:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.017 03:30:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:53.017 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:53.017 03:30:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.017 03:30:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.017 03:30:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.017 03:30:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.017 03:30:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.017 03:30:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.018 03:30:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:53.018 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:53.018 03:30:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.018 03:30:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.018 03:30:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.018 03:30:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.018 03:30:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.018 03:30:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
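
With both E810 ports identified, the harness wires up a point-to-point test network: the first port (cvl_0_0) is moved into a private namespace where the target will run, the second (cvl_0_1) stays in the root namespace as the initiator side, and a ping in each direction verifies the link. Condensed, the ip commands in the trace that follows amount to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
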
00:16:53.018 03:30:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:53.018 03:30:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:53.018 03:30:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.018 03:30:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.018 03:30:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:53.018 03:30:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.018 03:30:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:53.018 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:53.018 03:30:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.018 03:30:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.018 03:30:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.018 03:30:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:53.018 03:30:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.018 03:30:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:53.018 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:53.018 03:30:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.018 03:30:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:53.018 03:30:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:53.018 03:30:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:53.018 03:30:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:53.018 03:30:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:53.018 03:30:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.018 03:30:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.018 03:30:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.018 03:30:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:53.018 03:30:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.018 03:30:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.018 03:30:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:53.018 03:30:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.018 03:30:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.018 03:30:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:53.018 03:30:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:53.018 03:30:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.018 03:30:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.018 03:30:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.018 03:30:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.018 03:30:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:53.018 03:30:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.018 03:30:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.018 03:30:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.276 03:30:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:53.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:53.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:16:53.276 00:16:53.276 --- 10.0.0.2 ping statistics --- 00:16:53.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.276 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:16:53.276 03:30:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:53.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:16:53.276 00:16:53.276 --- 10.0.0.1 ping statistics --- 00:16:53.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.276 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:16:53.276 03:30:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.276 03:30:30 -- nvmf/common.sh@411 -- # return 0 00:16:53.276 03:30:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:53.276 03:30:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.276 03:30:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:53.276 03:30:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:53.276 03:30:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.276 03:30:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:53.276 03:30:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:53.276 03:30:30 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:53.276 03:30:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:53.276 03:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:53.276 03:30:30 -- host/identify.sh@19 -- # nvmfpid=283978 00:16:53.276 03:30:30 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:53.276 03:30:30 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:53.276 03:30:30 -- host/identify.sh@23 -- # waitforlisten 283978 00:16:53.276 03:30:30 -- common/autotest_common.sh@817 -- # '[' -z 283978 ']' 00:16:53.276 03:30:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.276 03:30:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:53.276 03:30:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.276 03:30:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:53.276 03:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:53.276 [2024-04-19 03:30:30.657989] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:16:53.276 [2024-04-19 03:30:30.658068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.276 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.276 [2024-04-19 03:30:30.724676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:53.276 [2024-04-19 03:30:30.832872] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.276 [2024-04-19 03:30:30.832926] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:53.276 [2024-04-19 03:30:30.832950] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.276 [2024-04-19 03:30:30.832978] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.276 [2024-04-19 03:30:30.832988] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.276 [2024-04-19 03:30:30.833075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.276 [2024-04-19 03:30:30.833120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.276 [2024-04-19 03:30:30.833177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.276 [2024-04-19 03:30:30.833180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.533 03:30:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:53.533 03:30:30 -- common/autotest_common.sh@850 -- # return 0 00:16:53.533 03:30:30 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.533 03:30:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.533 03:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:53.533 [2024-04-19 03:30:30.961000] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.533 03:30:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.533 03:30:30 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:53.534 03:30:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:53.534 03:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:53.534 03:30:30 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:53.534 03:30:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.534 03:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:53.534 Malloc0 00:16:53.534 03:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.534 03:30:31 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:53.534 03:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.534 03:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:53.534 03:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.534 03:30:31 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:53.534 03:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.534 03:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:53.534 03:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.534 03:30:31 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.534 03:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.534 03:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:53.534 [2024-04-19 03:30:31.032139] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.534 03:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.534 03:30:31 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:53.534 03:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.534 03:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:53.534 03:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.534 03:30:31 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:16:53.534 03:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.534 03:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:53.534 [2024-04-19 03:30:31.047930] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:53.534 [ 00:16:53.534 { 00:16:53.534 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:53.534 "subtype": "Discovery", 00:16:53.534 "listen_addresses": [ 00:16:53.534 { 00:16:53.534 "transport": "TCP", 00:16:53.534 "trtype": "TCP", 00:16:53.534 "adrfam": "IPv4", 00:16:53.534 "traddr": "10.0.0.2", 00:16:53.534 "trsvcid": "4420" 00:16:53.534 } 00:16:53.534 ], 00:16:53.534 "allow_any_host": true, 00:16:53.534 "hosts": [] 00:16:53.534 }, 00:16:53.534 { 00:16:53.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.534 "subtype": "NVMe", 00:16:53.534 "listen_addresses": [ 00:16:53.534 { 00:16:53.534 "transport": "TCP", 00:16:53.534 "trtype": "TCP", 00:16:53.534 "adrfam": "IPv4", 00:16:53.534 "traddr": "10.0.0.2", 00:16:53.534 "trsvcid": "4420" 00:16:53.534 } 00:16:53.534 ], 00:16:53.534 "allow_any_host": true, 00:16:53.534 "hosts": [], 00:16:53.534 "serial_number": "SPDK00000000000001", 00:16:53.534 "model_number": "SPDK bdev Controller", 00:16:53.534 "max_namespaces": 32, 00:16:53.534 "min_cntlid": 1, 00:16:53.534 "max_cntlid": 65519, 00:16:53.534 "namespaces": [ 00:16:53.534 { 00:16:53.534 "nsid": 1, 00:16:53.534 "bdev_name": "Malloc0", 00:16:53.534 "name": "Malloc0", 00:16:53.534 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:53.534 "eui64": "ABCDEF0123456789", 00:16:53.534 "uuid": "7abe3157-c9b9-45a8-b2df-1e25e6d32634" 00:16:53.534 } 00:16:53.534 ] 00:16:53.534 } 00:16:53.534 ] 00:16:53.534 03:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.534 03:30:31 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:53.534 [2024-04-19 03:30:31.069502] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:16:53.534 [2024-04-19 03:30:31.069539] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284115 ] 00:16:53.534 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.794 [2024-04-19 03:30:31.103820] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:53.794 [2024-04-19 03:30:31.103881] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:53.794 [2024-04-19 03:30:31.103890] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:53.794 [2024-04-19 03:30:31.103906] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:53.794 [2024-04-19 03:30:31.103919] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:53.794 [2024-04-19 03:30:31.104233] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:53.794 [2024-04-19 03:30:31.104288] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a5cd00 0 00:16:53.794 [2024-04-19 03:30:31.118399] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:53.794 [2024-04-19 03:30:31.118430] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:53.794 [2024-04-19 03:30:31.118438] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:53.794 [2024-04-19 03:30:31.118443] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:53.794 [2024-04-19 03:30:31.118504] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.118516] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.118524] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5cd00) 00:16:53.795 [2024-04-19 03:30:31.118542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:53.795 [2024-04-19 03:30:31.118568] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abbec0, cid 0, qid 0 00:16:53.795 [2024-04-19 03:30:31.126396] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.795 [2024-04-19 03:30:31.126413] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.795 [2024-04-19 03:30:31.126421] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.126429] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abbec0) on tqpair=0x1a5cd00 00:16:53.795 [2024-04-19 03:30:31.126464] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:53.795 [2024-04-19 03:30:31.126476] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:53.795 [2024-04-19 03:30:31.126486] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:53.795 [2024-04-19 03:30:31.126508] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.126516] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.126523] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5cd00) 00:16:53.795 [2024-04-19 03:30:31.126535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.795 [2024-04-19 03:30:31.126559] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abbec0, cid 0, qid 0 00:16:53.795 [2024-04-19 03:30:31.126709] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.795 [2024-04-19 03:30:31.126725] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.795 [2024-04-19 03:30:31.126732] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.126743] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abbec0) on tqpair=0x1a5cd00 00:16:53.795 [2024-04-19 03:30:31.126755] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:53.795 [2024-04-19 03:30:31.126768] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:53.795 [2024-04-19 03:30:31.126780] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.126788] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.126795] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5cd00) 00:16:53.795 [2024-04-19 03:30:31.126806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.795 [2024-04-19 03:30:31.126828] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abbec0, cid 0, qid 0 00:16:53.795 [2024-04-19 03:30:31.126954] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.795 [2024-04-19 03:30:31.126969] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.795 [2024-04-19 03:30:31.126976] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.126983] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abbec0) on tqpair=0x1a5cd00 00:16:53.795 [2024-04-19 03:30:31.126993] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:53.795 [2024-04-19 03:30:31.127007] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:53.795 [2024-04-19 03:30:31.127019] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.127027] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.127034] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5cd00) 00:16:53.795 [2024-04-19 03:30:31.127045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.795 [2024-04-19 03:30:31.127066] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abbec0, cid 0, qid 0 00:16:53.795 [2024-04-19 03:30:31.127189] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.795 [2024-04-19 
03:30:31.127204] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.795 [2024-04-19 03:30:31.127211] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.127218] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abbec0) on tqpair=0x1a5cd00 00:16:53.795 [2024-04-19 03:30:31.127228] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:53.795 [2024-04-19 03:30:31.127245] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.127254] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.127261] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5cd00) 00:16:53.795 [2024-04-19 03:30:31.127271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.795 [2024-04-19 03:30:31.127293] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abbec0, cid 0, qid 0 00:16:53.795 [2024-04-19 03:30:31.127421] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.795 [2024-04-19 03:30:31.127436] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.795 [2024-04-19 03:30:31.127443] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.127450] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abbec0) on tqpair=0x1a5cd00 00:16:53.795 [2024-04-19 03:30:31.127460] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:53.795 [2024-04-19 03:30:31.127473] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:53.795 [2024-04-19 03:30:31.127486] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:53.795 [2024-04-19 03:30:31.127596] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:53.795 [2024-04-19 03:30:31.127605] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:53.795 [2024-04-19 03:30:31.127619] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.127627] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.127633] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5cd00) 00:16:53.795 [2024-04-19 03:30:31.127644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.795 [2024-04-19 03:30:31.127666] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abbec0, cid 0, qid 0 00:16:53.795 [2024-04-19 03:30:31.127799] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.795 [2024-04-19 03:30:31.127811] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.795 [2024-04-19 03:30:31.127817] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.127824] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abbec0) on tqpair=0x1a5cd00 00:16:53.795 [2024-04-19 03:30:31.127834] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:53.795 [2024-04-19 03:30:31.127850] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.127859] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.127865] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5cd00) 00:16:53.795 [2024-04-19 03:30:31.127876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.795 [2024-04-19 03:30:31.127897] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abbec0, cid 0, qid 0 00:16:53.795 [2024-04-19 03:30:31.128021] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.795 [2024-04-19 03:30:31.128036] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.795 [2024-04-19 03:30:31.128043] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.128050] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abbec0) on tqpair=0x1a5cd00 00:16:53.795 [2024-04-19 03:30:31.128059] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:53.795 [2024-04-19 03:30:31.128068] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:53.795 [2024-04-19 03:30:31.128081] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:53.795 [2024-04-19 03:30:31.128100] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:53.795 [2024-04-19 03:30:31.128118] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.795 [2024-04-19 03:30:31.128127] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5cd00) 00:16:53.795 [2024-04-19 03:30:31.128138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.796 [2024-04-19 03:30:31.128181] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abbec0, cid 0, qid 0 00:16:53.796 [2024-04-19 03:30:31.128362] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:53.796 [2024-04-19 03:30:31.128378] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:53.796 [2024-04-19 03:30:31.128395] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.128403] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5cd00): datao=0, datal=4096, cccid=0 00:16:53.796 [2024-04-19 03:30:31.128410] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abbec0) on tqpair(0x1a5cd00): expected_datao=0, payload_size=4096 00:16:53.796 [2024-04-19 03:30:31.128419] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.128437] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.128446] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.169500] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.796 [2024-04-19 03:30:31.169519] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.796 [2024-04-19 03:30:31.169526] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.169533] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abbec0) on tqpair=0x1a5cd00 00:16:53.796 [2024-04-19 03:30:31.169548] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:53.796 [2024-04-19 03:30:31.169557] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:53.796 [2024-04-19 03:30:31.169565] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:53.796 [2024-04-19 03:30:31.169574] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:53.796 [2024-04-19 03:30:31.169582] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:53.796 [2024-04-19 03:30:31.169590] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:53.796 [2024-04-19 03:30:31.169605] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:53.796 [2024-04-19 03:30:31.169617] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.169626] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.169633] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5cd00) 00:16:53.796 [2024-04-19 03:30:31.169644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:53.796 [2024-04-19 03:30:31.169667] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abbec0, cid 0, qid 0 00:16:53.796 [2024-04-19 03:30:31.169797] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.796 [2024-04-19 03:30:31.169813] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.796 [2024-04-19 03:30:31.169821] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.169828] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abbec0) on tqpair=0x1a5cd00 00:16:53.796 [2024-04-19 03:30:31.169841] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.169849] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.169856] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5cd00) 00:16:53.796 [2024-04-19 03:30:31.169867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:16:53.796 [2024-04-19 03:30:31.169877] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.169885] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.169897] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a5cd00) 00:16:53.796 [2024-04-19 03:30:31.169907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.796 [2024-04-19 03:30:31.169917] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.169925] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.169931] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a5cd00) 00:16:53.796 [2024-04-19 03:30:31.169955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.796 [2024-04-19 03:30:31.169965] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.169972] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.169978] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5cd00) 00:16:53.796 [2024-04-19 03:30:31.169987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.796 [2024-04-19 03:30:31.169996] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:53.796 [2024-04-19 03:30:31.170015] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:53.796 [2024-04-19 03:30:31.170027] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.170035] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5cd00) 00:16:53.796 [2024-04-19 03:30:31.170045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.796 [2024-04-19 03:30:31.170067] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abbec0, cid 0, qid 0 00:16:53.796 [2024-04-19 03:30:31.170093] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abc020, cid 1, qid 0 00:16:53.796 [2024-04-19 03:30:31.170102] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abc180, cid 2, qid 0 00:16:53.796 [2024-04-19 03:30:31.170109] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abc2e0, cid 3, qid 0 00:16:53.796 [2024-04-19 03:30:31.170117] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abc440, cid 4, qid 0 00:16:53.796 [2024-04-19 03:30:31.170274] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.796 [2024-04-19 03:30:31.170289] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.796 [2024-04-19 03:30:31.170296] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.170303] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abc440) on tqpair=0x1a5cd00 
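
The stretch above shows the driver configuring asynchronous event reporting (SET FEATURES, FID 0x0b) and then arming four ASYNC EVENT REQUEST commands (cid 0 through 3), which the controller holds open until an event fires; the identify data later in this log reports an Async Event Request Limit of 4, which is why exactly four are queued. From an application's point of view this arming is internal to the driver; a consumer only registers a callback. A minimal sketch against the public SPDK API (the application-side function name is hypothetical, error handling omitted):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Called by the driver whenever one of its armed AER slots completes;
     * the driver re-arms the slot itself. */
    static void on_aer(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            /* cdw0 packs the async event type/info per the NVMe spec. */
            printf("async event: cdw0=0x%08x\n", cpl->cdw0);
    }

    /* Hypothetical application hook: attach the callback to an already
     * connected controller handle. */
    static void watch_events(struct spdk_nvme_ctrlr *ctrlr)
    {
            spdk_nvme_ctrlr_register_aer_callback(ctrlr, on_aer, NULL);
    }
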
00:16:53.796 [2024-04-19 03:30:31.170314] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:53.796 [2024-04-19 03:30:31.170323] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:53.796 [2024-04-19 03:30:31.170341] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.170351] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5cd00) 00:16:53.796 [2024-04-19 03:30:31.170362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.796 [2024-04-19 03:30:31.174389] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abc440, cid 4, qid 0 00:16:53.796 [2024-04-19 03:30:31.174418] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:53.796 [2024-04-19 03:30:31.174444] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:53.796 [2024-04-19 03:30:31.174451] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.174462] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5cd00): datao=0, datal=4096, cccid=4 00:16:53.796 [2024-04-19 03:30:31.174471] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abc440) on tqpair(0x1a5cd00): expected_datao=0, payload_size=4096 00:16:53.796 [2024-04-19 03:30:31.174479] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.174490] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.174498] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.174506] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.796 [2024-04-19 03:30:31.174515] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.796 [2024-04-19 03:30:31.174522] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.174529] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abc440) on tqpair=0x1a5cd00 00:16:53.796 [2024-04-19 03:30:31.174551] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:53.796 [2024-04-19 03:30:31.174584] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.174594] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5cd00) 00:16:53.796 [2024-04-19 03:30:31.174606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.796 [2024-04-19 03:30:31.174617] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.174626] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.796 [2024-04-19 03:30:31.174633] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5cd00) 00:16:53.796 [2024-04-19 03:30:31.174643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.796 [2024-04-19 03:30:31.174672] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abc440, cid 4, qid 0 00:16:53.796 [2024-04-19 03:30:31.174684] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abc5a0, cid 5, qid 0 00:16:53.796 [2024-04-19 03:30:31.174857] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:53.796 [2024-04-19 03:30:31.174874] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:53.796 [2024-04-19 03:30:31.174881] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.174888] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5cd00): datao=0, datal=1024, cccid=4 00:16:53.797 [2024-04-19 03:30:31.174897] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abc440) on tqpair(0x1a5cd00): expected_datao=0, payload_size=1024 00:16:53.797 [2024-04-19 03:30:31.174905] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.174915] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.174924] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.174933] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.797 [2024-04-19 03:30:31.174941] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.797 [2024-04-19 03:30:31.174949] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.174956] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abc5a0) on tqpair=0x1a5cd00 00:16:53.797 [2024-04-19 03:30:31.215517] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.797 [2024-04-19 03:30:31.215536] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.797 [2024-04-19 03:30:31.215544] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.215551] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abc440) on tqpair=0x1a5cd00 00:16:53.797 [2024-04-19 03:30:31.215570] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.215585] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5cd00) 00:16:53.797 [2024-04-19 03:30:31.215598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.797 [2024-04-19 03:30:31.215628] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abc440, cid 4, qid 0 00:16:53.797 [2024-04-19 03:30:31.215775] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:53.797 [2024-04-19 03:30:31.215788] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:53.797 [2024-04-19 03:30:31.215794] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.215801] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5cd00): datao=0, datal=3072, cccid=4 00:16:53.797 [2024-04-19 03:30:31.215809] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abc440) on tqpair(0x1a5cd00): expected_datao=0, payload_size=3072 00:16:53.797 [2024-04-19 03:30:31.215816] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.215826] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
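
The GET LOG PAGE (opcode 0x02) commands in this stretch fetch the discovery log in stages: cdw10 0x00ff0070 pulls the first 1024 bytes, 0x02ff0070 then pulls the full 3072-byte page, and the 8-byte read with cdw10 0x00010070 just below apparently re-checks the page's generation counter. cdw10 packs the 0's-based dword count (NUMDL, bits 31:16) with the log page ID in bits 7:0; 0x70 is the discovery log page. A sketch of that encoding, using a hypothetical helper:

    #include <assert.h>
    #include <stdint.h>

    /* Build GET LOG PAGE cdw10: NUMDL (0's-based dword count) in bits
     * 31:16, log page ID in bits 7:0.  Ignores NUMDU and the offset
     * fields, which these commands do not use. */
    static uint32_t glp_cdw10(uint8_t lid, uint32_t payload_bytes)
    {
            uint32_t numd = payload_bytes / 4 - 1;
            return ((numd & 0xffff) << 16) | lid;
    }

    int main(void)
    {
            assert(glp_cdw10(0x70, 1024) == 0x00ff0070); /* first chunk     */
            assert(glp_cdw10(0x70, 3072) == 0x02ff0070); /* full page       */
            assert(glp_cdw10(0x70, 8)    == 0x00010070); /* genctr re-check */
            return 0;
    }
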
00:16:53.797 [2024-04-19 03:30:31.215834] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.215860] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.797 [2024-04-19 03:30:31.215871] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.797 [2024-04-19 03:30:31.215878] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.215885] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abc440) on tqpair=0x1a5cd00 00:16:53.797 [2024-04-19 03:30:31.215901] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.215910] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5cd00) 00:16:53.797 [2024-04-19 03:30:31.215921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.797 [2024-04-19 03:30:31.215949] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abc440, cid 4, qid 0 00:16:53.797 [2024-04-19 03:30:31.216089] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:53.797 [2024-04-19 03:30:31.216104] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:53.797 [2024-04-19 03:30:31.216111] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.216117] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5cd00): datao=0, datal=8, cccid=4 00:16:53.797 [2024-04-19 03:30:31.216125] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abc440) on tqpair(0x1a5cd00): expected_datao=0, payload_size=8 00:16:53.797 [2024-04-19 03:30:31.216133] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.216142] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.216150] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.256505] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.797 [2024-04-19 03:30:31.256523] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.797 [2024-04-19 03:30:31.256530] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.797 [2024-04-19 03:30:31.256537] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abc440) on tqpair=0x1a5cd00 00:16:53.797 ===================================================== 00:16:53.797 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:53.797 ===================================================== 00:16:53.797 Controller Capabilities/Features 00:16:53.797 ================================ 00:16:53.797 Vendor ID: 0000 00:16:53.797 Subsystem Vendor ID: 0000 00:16:53.797 Serial Number: .................... 00:16:53.797 Model Number: ........................................ 
00:16:53.797 Firmware Version: 24.05 00:16:53.797 Recommended Arb Burst: 0 00:16:53.797 IEEE OUI Identifier: 00 00 00 00:16:53.797 Multi-path I/O 00:16:53.797 May have multiple subsystem ports: No 00:16:53.797 May have multiple controllers: No 00:16:53.797 Associated with SR-IOV VF: No 00:16:53.797 Max Data Transfer Size: 131072 00:16:53.797 Max Number of Namespaces: 0 00:16:53.797 Max Number of I/O Queues: 1024 00:16:53.797 NVMe Specification Version (VS): 1.3 00:16:53.797 NVMe Specification Version (Identify): 1.3 00:16:53.797 Maximum Queue Entries: 128 00:16:53.797 Contiguous Queues Required: Yes 00:16:53.797 Arbitration Mechanisms Supported 00:16:53.797 Weighted Round Robin: Not Supported 00:16:53.797 Vendor Specific: Not Supported 00:16:53.797 Reset Timeout: 15000 ms 00:16:53.797 Doorbell Stride: 4 bytes 00:16:53.797 NVM Subsystem Reset: Not Supported 00:16:53.797 Command Sets Supported 00:16:53.797 NVM Command Set: Supported 00:16:53.797 Boot Partition: Not Supported 00:16:53.797 Memory Page Size Minimum: 4096 bytes 00:16:53.797 Memory Page Size Maximum: 4096 bytes 00:16:53.797 Persistent Memory Region: Not Supported 00:16:53.797 Optional Asynchronous Events Supported 00:16:53.797 Namespace Attribute Notices: Not Supported 00:16:53.797 Firmware Activation Notices: Not Supported 00:16:53.797 ANA Change Notices: Not Supported 00:16:53.797 PLE Aggregate Log Change Notices: Not Supported 00:16:53.797 LBA Status Info Alert Notices: Not Supported 00:16:53.797 EGE Aggregate Log Change Notices: Not Supported 00:16:53.797 Normal NVM Subsystem Shutdown event: Not Supported 00:16:53.797 Zone Descriptor Change Notices: Not Supported 00:16:53.797 Discovery Log Change Notices: Supported 00:16:53.797 Controller Attributes 00:16:53.797 128-bit Host Identifier: Not Supported 00:16:53.797 Non-Operational Permissive Mode: Not Supported 00:16:53.797 NVM Sets: Not Supported 00:16:53.797 Read Recovery Levels: Not Supported 00:16:53.797 Endurance Groups: Not Supported 00:16:53.797 Predictable Latency Mode: Not Supported 00:16:53.797 Traffic Based Keep ALive: Not Supported 00:16:53.797 Namespace Granularity: Not Supported 00:16:53.797 SQ Associations: Not Supported 00:16:53.797 UUID List: Not Supported 00:16:53.797 Multi-Domain Subsystem: Not Supported 00:16:53.797 Fixed Capacity Management: Not Supported 00:16:53.797 Variable Capacity Management: Not Supported 00:16:53.797 Delete Endurance Group: Not Supported 00:16:53.797 Delete NVM Set: Not Supported 00:16:53.797 Extended LBA Formats Supported: Not Supported 00:16:53.797 Flexible Data Placement Supported: Not Supported 00:16:53.797 00:16:53.797 Controller Memory Buffer Support 00:16:53.797 ================================ 00:16:53.797 Supported: No 00:16:53.797 00:16:53.797 Persistent Memory Region Support 00:16:53.797 ================================ 00:16:53.797 Supported: No 00:16:53.797 00:16:53.797 Admin Command Set Attributes 00:16:53.797 ============================ 00:16:53.797 Security Send/Receive: Not Supported 00:16:53.797 Format NVM: Not Supported 00:16:53.797 Firmware Activate/Download: Not Supported 00:16:53.797 Namespace Management: Not Supported 00:16:53.797 Device Self-Test: Not Supported 00:16:53.797 Directives: Not Supported 00:16:53.797 NVMe-MI: Not Supported 00:16:53.797 Virtualization Management: Not Supported 00:16:53.797 Doorbell Buffer Config: Not Supported 00:16:53.797 Get LBA Status Capability: Not Supported 00:16:53.797 Command & Feature Lockdown Capability: Not Supported 00:16:53.797 Abort Command Limit: 1 00:16:53.797 Async 
Event Request Limit: 4 00:16:53.798 Number of Firmware Slots: N/A 00:16:53.798 Firmware Slot 1 Read-Only: N/A 00:16:53.798 Firmware Activation Without Reset: N/A 00:16:53.798 Multiple Update Detection Support: N/A 00:16:53.798 Firmware Update Granularity: No Information Provided 00:16:53.798 Per-Namespace SMART Log: No 00:16:53.798 Asymmetric Namespace Access Log Page: Not Supported 00:16:53.798 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:53.798 Command Effects Log Page: Not Supported 00:16:53.798 Get Log Page Extended Data: Supported 00:16:53.798 Telemetry Log Pages: Not Supported 00:16:53.798 Persistent Event Log Pages: Not Supported 00:16:53.798 Supported Log Pages Log Page: May Support 00:16:53.798 Commands Supported & Effects Log Page: Not Supported 00:16:53.798 Feature Identifiers & Effects Log Page:May Support 00:16:53.798 NVMe-MI Commands & Effects Log Page: May Support 00:16:53.798 Data Area 4 for Telemetry Log: Not Supported 00:16:53.798 Error Log Page Entries Supported: 128 00:16:53.798 Keep Alive: Not Supported 00:16:53.798 00:16:53.798 NVM Command Set Attributes 00:16:53.798 ========================== 00:16:53.798 Submission Queue Entry Size 00:16:53.798 Max: 1 00:16:53.798 Min: 1 00:16:53.798 Completion Queue Entry Size 00:16:53.798 Max: 1 00:16:53.798 Min: 1 00:16:53.798 Number of Namespaces: 0 00:16:53.798 Compare Command: Not Supported 00:16:53.798 Write Uncorrectable Command: Not Supported 00:16:53.798 Dataset Management Command: Not Supported 00:16:53.798 Write Zeroes Command: Not Supported 00:16:53.798 Set Features Save Field: Not Supported 00:16:53.798 Reservations: Not Supported 00:16:53.798 Timestamp: Not Supported 00:16:53.798 Copy: Not Supported 00:16:53.798 Volatile Write Cache: Not Present 00:16:53.798 Atomic Write Unit (Normal): 1 00:16:53.798 Atomic Write Unit (PFail): 1 00:16:53.798 Atomic Compare & Write Unit: 1 00:16:53.798 Fused Compare & Write: Supported 00:16:53.798 Scatter-Gather List 00:16:53.798 SGL Command Set: Supported 00:16:53.798 SGL Keyed: Supported 00:16:53.798 SGL Bit Bucket Descriptor: Not Supported 00:16:53.798 SGL Metadata Pointer: Not Supported 00:16:53.798 Oversized SGL: Not Supported 00:16:53.798 SGL Metadata Address: Not Supported 00:16:53.798 SGL Offset: Supported 00:16:53.798 Transport SGL Data Block: Not Supported 00:16:53.798 Replay Protected Memory Block: Not Supported 00:16:53.798 00:16:53.798 Firmware Slot Information 00:16:53.798 ========================= 00:16:53.798 Active slot: 0 00:16:53.798 00:16:53.798 00:16:53.798 Error Log 00:16:53.798 ========= 00:16:53.798 00:16:53.798 Active Namespaces 00:16:53.798 ================= 00:16:53.798 Discovery Log Page 00:16:53.798 ================== 00:16:53.798 Generation Counter: 2 00:16:53.798 Number of Records: 2 00:16:53.798 Record Format: 0 00:16:53.798 00:16:53.798 Discovery Log Entry 0 00:16:53.798 ---------------------- 00:16:53.798 Transport Type: 3 (TCP) 00:16:53.798 Address Family: 1 (IPv4) 00:16:53.798 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:53.798 Entry Flags: 00:16:53.798 Duplicate Returned Information: 1 00:16:53.798 Explicit Persistent Connection Support for Discovery: 1 00:16:53.798 Transport Requirements: 00:16:53.798 Secure Channel: Not Required 00:16:53.798 Port ID: 0 (0x0000) 00:16:53.798 Controller ID: 65535 (0xffff) 00:16:53.798 Admin Max SQ Size: 128 00:16:53.798 Transport Service Identifier: 4420 00:16:53.798 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:53.798 Transport Address: 10.0.0.2 00:16:53.798 
Discovery Log Entry 1 00:16:53.798 ---------------------- 00:16:53.798 Transport Type: 3 (TCP) 00:16:53.798 Address Family: 1 (IPv4) 00:16:53.798 Subsystem Type: 2 (NVM Subsystem) 00:16:53.798 Entry Flags: 00:16:53.798 Duplicate Returned Information: 0 00:16:53.798 Explicit Persistent Connection Support for Discovery: 0 00:16:53.798 Transport Requirements: 00:16:53.798 Secure Channel: Not Required 00:16:53.798 Port ID: 0 (0x0000) 00:16:53.798 Controller ID: 65535 (0xffff) 00:16:53.798 Admin Max SQ Size: 128 00:16:53.798 Transport Service Identifier: 4420 00:16:53.798 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:53.798 Transport Address: 10.0.0.2 [2024-04-19 03:30:31.256657] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:16:53.798 [2024-04-19 03:30:31.256681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.798 [2024-04-19 03:30:31.256694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.798 [2024-04-19 03:30:31.256704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.798 [2024-04-19 03:30:31.256717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.798 [2024-04-19 03:30:31.256731] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.798 [2024-04-19 03:30:31.256739] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.798 [2024-04-19 03:30:31.256746] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5cd00) 00:16:53.798 [2024-04-19 03:30:31.256757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.798 [2024-04-19 03:30:31.256797] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abc2e0, cid 3, qid 0 00:16:53.798 [2024-04-19 03:30:31.256933] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.798 [2024-04-19 03:30:31.256949] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.798 [2024-04-19 03:30:31.256956] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.798 [2024-04-19 03:30:31.256963] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abc2e0) on tqpair=0x1a5cd00 00:16:53.798 [2024-04-19 03:30:31.256976] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.798 [2024-04-19 03:30:31.256984] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.798 [2024-04-19 03:30:31.256991] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5cd00) 00:16:53.798 [2024-04-19 03:30:31.257002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.798 [2024-04-19 03:30:31.257029] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abc2e0, cid 3, qid 0 00:16:53.798 [2024-04-19 03:30:31.257165] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.798 [2024-04-19 03:30:31.257178] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.798 [2024-04-19 03:30:31.257185] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.798 [2024-04-19 03:30:31.257192] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abc2e0) on tqpair=0x1a5cd00 00:16:53.798 [2024-04-19 03:30:31.257202] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:16:53.798 [2024-04-19 03:30:31.257211] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:16:53.798 [2024-04-19 03:30:31.257227] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.798 [2024-04-19 03:30:31.257236] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.798 [2024-04-19 03:30:31.257244] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5cd00) 00:16:53.798 [2024-04-19 03:30:31.257254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.799 [2024-04-19 03:30:31.257275] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abc2e0, cid 3, qid 0 00:16:53.799 [2024-04-19 03:30:31.261393] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.799 [2024-04-19 03:30:31.261412] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.799 [2024-04-19 03:30:31.261419] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.261426] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abc2e0) on tqpair=0x1a5cd00 00:16:53.799 [2024-04-19 03:30:31.261445] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.261455] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.261462] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5cd00) 00:16:53.799 [2024-04-19 03:30:31.261473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.799 [2024-04-19 03:30:31.261495] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abc2e0, cid 3, qid 0 00:16:53.799 [2024-04-19 03:30:31.261632] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.799 [2024-04-19 03:30:31.261645] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.799 [2024-04-19 03:30:31.261652] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.261659] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1abc2e0) on tqpair=0x1a5cd00 00:16:53.799 [2024-04-19 03:30:31.261673] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:16:53.799 00:16:53.799 03:30:31 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:53.799 [2024-04-19 03:30:31.296782] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
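
Here the harness invokes spdk_nvme_identify directly against the NVM subsystem, handing it the connection parameters as a transport-ID string via -r. The same string drives the public API: parse it into an spdk_nvme_transport_id and connect, which runs exactly the admin-queue state machine traced below (connect adminq, read vs, read cap, check en, enable, ...). A minimal sketch, assuming the SPDK environment is already initialized and with error handling omitted:

    #include <stddef.h>
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *connect_cnode1(void)
    {
            struct spdk_nvme_transport_id trid = {0};

            /* Same transport-ID string the harness passes via -r above. */
            spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1");

            /* Synchronous probe + attach over NVMe/TCP; NULL/0 selects
             * the default controller options. */
            return spdk_nvme_connect(&trid, NULL, 0);
    }
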
00:16:53.799 [2024-04-19 03:30:31.296825] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284126 ] 00:16:53.799 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.799 [2024-04-19 03:30:31.332120] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:16:53.799 [2024-04-19 03:30:31.332174] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:53.799 [2024-04-19 03:30:31.332184] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:53.799 [2024-04-19 03:30:31.332198] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:53.799 [2024-04-19 03:30:31.332209] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:53.799 [2024-04-19 03:30:31.332475] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:16:53.799 [2024-04-19 03:30:31.332516] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5a4d00 0 00:16:53.799 [2024-04-19 03:30:31.339408] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:53.799 [2024-04-19 03:30:31.339427] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:53.799 [2024-04-19 03:30:31.339434] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:53.799 [2024-04-19 03:30:31.339440] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:53.799 [2024-04-19 03:30:31.339492] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.339504] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.339511] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a4d00) 00:16:53.799 [2024-04-19 03:30:31.339525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:53.799 [2024-04-19 03:30:31.339552] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x603ec0, cid 0, qid 0 00:16:53.799 [2024-04-19 03:30:31.346393] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.799 [2024-04-19 03:30:31.346411] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.799 [2024-04-19 03:30:31.346419] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.346426] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x603ec0) on tqpair=0x5a4d00 00:16:53.799 [2024-04-19 03:30:31.346444] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:53.799 [2024-04-19 03:30:31.346455] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:16:53.799 [2024-04-19 03:30:31.346465] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:16:53.799 [2024-04-19 03:30:31.346486] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.346496] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.799 [2024-04-19 
03:30:31.346502] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a4d00) 00:16:53.799 [2024-04-19 03:30:31.346514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.799 [2024-04-19 03:30:31.346538] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x603ec0, cid 0, qid 0 00:16:53.799 [2024-04-19 03:30:31.346681] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.799 [2024-04-19 03:30:31.346693] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.799 [2024-04-19 03:30:31.346700] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.346707] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x603ec0) on tqpair=0x5a4d00 00:16:53.799 [2024-04-19 03:30:31.346715] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:53.799 [2024-04-19 03:30:31.346728] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:53.799 [2024-04-19 03:30:31.346741] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.346748] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.346755] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a4d00) 00:16:53.799 [2024-04-19 03:30:31.346766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.799 [2024-04-19 03:30:31.346787] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x603ec0, cid 0, qid 0 00:16:53.799 [2024-04-19 03:30:31.346943] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.799 [2024-04-19 03:30:31.346955] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.799 [2024-04-19 03:30:31.346962] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.346968] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x603ec0) on tqpair=0x5a4d00 00:16:53.799 [2024-04-19 03:30:31.346977] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:53.799 [2024-04-19 03:30:31.346990] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:53.799 [2024-04-19 03:30:31.347002] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.347010] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.347016] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a4d00) 00:16:53.799 [2024-04-19 03:30:31.347027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.799 [2024-04-19 03:30:31.347048] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x603ec0, cid 0, qid 0 00:16:53.799 [2024-04-19 03:30:31.347187] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.799 [2024-04-19 03:30:31.347200] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.799 
[2024-04-19 03:30:31.347206] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.347213] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x603ec0) on tqpair=0x5a4d00 00:16:53.799 [2024-04-19 03:30:31.347221] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:53.799 [2024-04-19 03:30:31.347237] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.347245] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.347252] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a4d00) 00:16:53.799 [2024-04-19 03:30:31.347266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.799 [2024-04-19 03:30:31.347288] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x603ec0, cid 0, qid 0 00:16:53.799 [2024-04-19 03:30:31.347426] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.799 [2024-04-19 03:30:31.347442] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.799 [2024-04-19 03:30:31.347449] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.347455] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x603ec0) on tqpair=0x5a4d00 00:16:53.799 [2024-04-19 03:30:31.347463] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:53.799 [2024-04-19 03:30:31.347471] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:53.799 [2024-04-19 03:30:31.347485] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:53.799 [2024-04-19 03:30:31.347594] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:53.799 [2024-04-19 03:30:31.347602] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:53.799 [2024-04-19 03:30:31.347614] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.347621] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.799 [2024-04-19 03:30:31.347628] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a4d00) 00:16:53.799 [2024-04-19 03:30:31.347638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.799 [2024-04-19 03:30:31.347660] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x603ec0, cid 0, qid 0 00:16:53.799 [2024-04-19 03:30:31.347781] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.799 [2024-04-19 03:30:31.347793] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.800 [2024-04-19 03:30:31.347800] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.800 [2024-04-19 03:30:31.347806] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x603ec0) on tqpair=0x5a4d00 00:16:53.800 
[2024-04-19 03:30:31.347814] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:53.800 [2024-04-19 03:30:31.347831] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.800 [2024-04-19 03:30:31.347840] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.800 [2024-04-19 03:30:31.347847] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a4d00) 00:16:53.800 [2024-04-19 03:30:31.347857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.800 [2024-04-19 03:30:31.347878] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x603ec0, cid 0, qid 0 00:16:53.800 [2024-04-19 03:30:31.348031] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:53.800 [2024-04-19 03:30:31.348043] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:53.800 [2024-04-19 03:30:31.348050] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:53.800 [2024-04-19 03:30:31.348056] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x603ec0) on tqpair=0x5a4d00 00:16:53.800 [2024-04-19 03:30:31.348064] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:53.800 [2024-04-19 03:30:31.348072] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:53.800 [2024-04-19 03:30:31.348089] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:53.800 [2024-04-19 03:30:31.348104] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:53.800 [2024-04-19 03:30:31.348120] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:53.800 [2024-04-19 03:30:31.348128] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a4d00) 00:16:53.800 [2024-04-19 03:30:31.348139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.800 [2024-04-19 03:30:31.348161] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x603ec0, cid 0, qid 0 00:16:53.800 [2024-04-19 03:30:31.348327] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:53.800 [2024-04-19 03:30:31.348339] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:53.800 [2024-04-19 03:30:31.348346] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:53.800 [2024-04-19 03:30:31.348352] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a4d00): datao=0, datal=4096, cccid=0 00:16:53.800 [2024-04-19 03:30:31.348360] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x603ec0) on tqpair(0x5a4d00): expected_datao=0, payload_size=4096 00:16:53.800 [2024-04-19 03:30:31.348367] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:53.800 [2024-04-19 03:30:31.348390] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:53.800 [2024-04-19 03:30:31.348401] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
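
The IDENTIFY (opcode 0x06) commands in this init sequence differ only in CNS, the low byte of cdw10. The record above uses CNS 0x01 (identify controller); because this target is an NVM subsystem rather than a discovery subsystem, the driver follows up below with CNS 0x02 (active namespace list), CNS 0x00 (identify namespace) and CNS 0x03 (namespace identification descriptors). A sketch of the cdw10 layout, with a hypothetical helper:

    #include <stdint.h>

    enum {
            CNS_IDENTIFY_NS       = 0x00, /* per-namespace data structure  */
            CNS_IDENTIFY_CTRLR    = 0x01, /* controller data structure     */
            CNS_ACTIVE_NS_LIST    = 0x02, /* list of active NSIDs          */
            CNS_NS_ID_DESCRIPTORS = 0x03, /* NS identification descriptors */
    };

    /* cdw10 for IDENTIFY: CNTID in bits 31:16, CNS in bits 7:0. */
    static uint32_t identify_cdw10(uint16_t cntid, uint8_t cns)
    {
            return ((uint32_t)cntid << 16) | cns; /* 0x00000001 above */
    }
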
00:16:54.061 [2024-04-19 03:30:31.392410] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.061 [2024-04-19 03:30:31.392427] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.061 [2024-04-19 03:30:31.392434] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.392441] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x603ec0) on tqpair=0x5a4d00 00:16:54.061 [2024-04-19 03:30:31.392452] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:54.061 [2024-04-19 03:30:31.392461] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:54.061 [2024-04-19 03:30:31.392469] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:54.061 [2024-04-19 03:30:31.392475] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:54.061 [2024-04-19 03:30:31.392483] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:54.061 [2024-04-19 03:30:31.392490] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:54.061 [2024-04-19 03:30:31.392505] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:54.061 [2024-04-19 03:30:31.392532] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.392540] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.392547] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a4d00) 00:16:54.061 [2024-04-19 03:30:31.392558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.061 [2024-04-19 03:30:31.392582] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x603ec0, cid 0, qid 0 00:16:54.061 [2024-04-19 03:30:31.392723] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.061 [2024-04-19 03:30:31.392738] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.061 [2024-04-19 03:30:31.392745] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.392752] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x603ec0) on tqpair=0x5a4d00 00:16:54.061 [2024-04-19 03:30:31.392766] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.392775] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.392782] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a4d00) 00:16:54.061 [2024-04-19 03:30:31.392792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.061 [2024-04-19 03:30:31.392803] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.392810] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.392816] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0x5a4d00) 00:16:54.061 [2024-04-19 03:30:31.392825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.061 [2024-04-19 03:30:31.392835] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.392842] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.392848] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5a4d00) 00:16:54.061 [2024-04-19 03:30:31.392857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.061 [2024-04-19 03:30:31.392867] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.392874] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.392897] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.061 [2024-04-19 03:30:31.392906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.061 [2024-04-19 03:30:31.392915] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:54.061 [2024-04-19 03:30:31.392934] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:54.061 [2024-04-19 03:30:31.392946] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.392953] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a4d00) 00:16:54.061 [2024-04-19 03:30:31.392963] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.061 [2024-04-19 03:30:31.392986] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x603ec0, cid 0, qid 0 00:16:54.061 [2024-04-19 03:30:31.393012] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x604020, cid 1, qid 0 00:16:54.061 [2024-04-19 03:30:31.393021] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x604180, cid 2, qid 0 00:16:54.061 [2024-04-19 03:30:31.393028] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.061 [2024-04-19 03:30:31.393036] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x604440, cid 4, qid 0 00:16:54.061 [2024-04-19 03:30:31.393190] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.061 [2024-04-19 03:30:31.393202] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.061 [2024-04-19 03:30:31.393209] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.393215] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x604440) on tqpair=0x5a4d00 00:16:54.061 [2024-04-19 03:30:31.393224] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:54.061 [2024-04-19 03:30:31.393233] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:54.061 
[2024-04-19 03:30:31.393250] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:54.061 [2024-04-19 03:30:31.393265] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:54.061 [2024-04-19 03:30:31.393276] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.393284] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.393290] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a4d00) 00:16:54.061 [2024-04-19 03:30:31.393317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.061 [2024-04-19 03:30:31.393338] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x604440, cid 4, qid 0 00:16:54.061 [2024-04-19 03:30:31.393488] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.061 [2024-04-19 03:30:31.393502] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.061 [2024-04-19 03:30:31.393509] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.393516] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x604440) on tqpair=0x5a4d00 00:16:54.061 [2024-04-19 03:30:31.393570] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:54.061 [2024-04-19 03:30:31.393588] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:54.061 [2024-04-19 03:30:31.393602] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.393610] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a4d00) 00:16:54.061 [2024-04-19 03:30:31.393621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.061 [2024-04-19 03:30:31.393642] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x604440, cid 4, qid 0 00:16:54.061 [2024-04-19 03:30:31.393790] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:54.061 [2024-04-19 03:30:31.393806] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:54.061 [2024-04-19 03:30:31.393813] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:54.061 [2024-04-19 03:30:31.393819] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a4d00): datao=0, datal=4096, cccid=4 00:16:54.061 [2024-04-19 03:30:31.393827] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x604440) on tqpair(0x5a4d00): expected_datao=0, payload_size=4096 00:16:54.061 [2024-04-19 03:30:31.393835] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.393845] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.393852] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.393875] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.062 [2024-04-19 03:30:31.393886] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.062 [2024-04-19 03:30:31.393893] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.393899] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x604440) on tqpair=0x5a4d00 00:16:54.062 [2024-04-19 03:30:31.393913] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:54.062 [2024-04-19 03:30:31.393931] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:54.062 [2024-04-19 03:30:31.393949] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:54.062 [2024-04-19 03:30:31.393962] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.393970] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a4d00) 00:16:54.062 [2024-04-19 03:30:31.393984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.062 [2024-04-19 03:30:31.394007] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x604440, cid 4, qid 0 00:16:54.062 [2024-04-19 03:30:31.394155] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:54.062 [2024-04-19 03:30:31.394167] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:54.062 [2024-04-19 03:30:31.394174] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394180] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a4d00): datao=0, datal=4096, cccid=4 00:16:54.062 [2024-04-19 03:30:31.394187] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x604440) on tqpair(0x5a4d00): expected_datao=0, payload_size=4096 00:16:54.062 [2024-04-19 03:30:31.394195] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394205] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394212] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394237] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.062 [2024-04-19 03:30:31.394248] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.062 [2024-04-19 03:30:31.394254] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394261] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x604440) on tqpair=0x5a4d00 00:16:54.062 [2024-04-19 03:30:31.394281] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:54.062 [2024-04-19 03:30:31.394300] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:54.062 [2024-04-19 03:30:31.394314] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394322] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a4d00) 00:16:54.062 [2024-04-19 03:30:31.394333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.062 [2024-04-19 03:30:31.394353] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x604440, cid 4, qid 0 00:16:54.062 [2024-04-19 03:30:31.394500] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:54.062 [2024-04-19 03:30:31.394513] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:54.062 [2024-04-19 03:30:31.394520] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394526] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a4d00): datao=0, datal=4096, cccid=4 00:16:54.062 [2024-04-19 03:30:31.394534] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x604440) on tqpair(0x5a4d00): expected_datao=0, payload_size=4096 00:16:54.062 [2024-04-19 03:30:31.394541] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394551] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394559] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394590] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.062 [2024-04-19 03:30:31.394601] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.062 [2024-04-19 03:30:31.394608] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394614] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x604440) on tqpair=0x5a4d00 00:16:54.062 [2024-04-19 03:30:31.394627] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:54.062 [2024-04-19 03:30:31.394642] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:54.062 [2024-04-19 03:30:31.394662] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:54.062 [2024-04-19 03:30:31.394673] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:54.062 [2024-04-19 03:30:31.394682] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:54.062 [2024-04-19 03:30:31.394691] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:54.062 [2024-04-19 03:30:31.394699] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:54.062 [2024-04-19 03:30:31.394707] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:16:54.062 [2024-04-19 03:30:31.394726] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394735] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a4d00) 00:16:54.062 [2024-04-19 03:30:31.394746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.062 [2024-04-19 03:30:31.394757] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394764] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.394786] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5a4d00) 00:16:54.062 [2024-04-19 03:30:31.394795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.062 [2024-04-19 03:30:31.394820] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x604440, cid 4, qid 0 00:16:54.062 [2024-04-19 03:30:31.394846] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6045a0, cid 5, qid 0 00:16:54.062 [2024-04-19 03:30:31.394987] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.062 [2024-04-19 03:30:31.395003] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.062 [2024-04-19 03:30:31.395010] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.395016] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x604440) on tqpair=0x5a4d00 00:16:54.062 [2024-04-19 03:30:31.395028] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.062 [2024-04-19 03:30:31.395037] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.062 [2024-04-19 03:30:31.395044] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.395050] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6045a0) on tqpair=0x5a4d00 00:16:54.062 [2024-04-19 03:30:31.395066] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.395075] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5a4d00) 00:16:54.062 [2024-04-19 03:30:31.395086] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.062 [2024-04-19 03:30:31.395107] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6045a0, cid 5, qid 0 00:16:54.062 [2024-04-19 03:30:31.395236] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.062 [2024-04-19 03:30:31.395251] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.062 [2024-04-19 03:30:31.395258] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.395265] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6045a0) on tqpair=0x5a4d00 00:16:54.062 [2024-04-19 03:30:31.395281] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.395290] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5a4d00) 00:16:54.062 [2024-04-19 03:30:31.395301] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.062 [2024-04-19 03:30:31.395325] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6045a0, cid 5, qid 0 00:16:54.062 [2024-04-19 03:30:31.395459] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.062 [2024-04-19 03:30:31.395475] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.062 [2024-04-19 03:30:31.395482] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.395489] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6045a0) on tqpair=0x5a4d00 00:16:54.062 [2024-04-19 03:30:31.395505] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.395514] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5a4d00) 00:16:54.062 [2024-04-19 03:30:31.395525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.062 [2024-04-19 03:30:31.395546] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6045a0, cid 5, qid 0 00:16:54.062 [2024-04-19 03:30:31.395697] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.062 [2024-04-19 03:30:31.395710] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.062 [2024-04-19 03:30:31.395716] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.395723] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6045a0) on tqpair=0x5a4d00 00:16:54.062 [2024-04-19 03:30:31.395743] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.395753] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5a4d00) 00:16:54.062 [2024-04-19 03:30:31.395764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.062 [2024-04-19 03:30:31.395776] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.395783] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a4d00) 00:16:54.062 [2024-04-19 03:30:31.395793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.062 [2024-04-19 03:30:31.395805] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.062 [2024-04-19 03:30:31.395827] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x5a4d00) 00:16:54.063 [2024-04-19 03:30:31.395837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.063 [2024-04-19 03:30:31.395849] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.395856] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5a4d00) 00:16:54.063 [2024-04-19 03:30:31.395866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.063 [2024-04-19 03:30:31.395887] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6045a0, cid 5, qid 0 00:16:54.063 [2024-04-19 03:30:31.395912] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x604440, cid 4, qid 0 00:16:54.063 [2024-04-19 03:30:31.395920] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x604700, cid 6, qid 0 00:16:54.063 [2024-04-19 03:30:31.395928] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x604860, cid 7, qid 0 00:16:54.063 [2024-04-19 03:30:31.396137] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:54.063 [2024-04-19 03:30:31.396153] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:54.063 [2024-04-19 03:30:31.396160] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.396166] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a4d00): datao=0, datal=8192, cccid=5 00:16:54.063 [2024-04-19 03:30:31.396177] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6045a0) on tqpair(0x5a4d00): expected_datao=0, payload_size=8192 00:16:54.063 [2024-04-19 03:30:31.396185] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.396260] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.396270] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.396279] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:54.063 [2024-04-19 03:30:31.396288] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:54.063 [2024-04-19 03:30:31.396294] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.396300] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a4d00): datao=0, datal=512, cccid=4 00:16:54.063 [2024-04-19 03:30:31.396308] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x604440) on tqpair(0x5a4d00): expected_datao=0, payload_size=512 00:16:54.063 [2024-04-19 03:30:31.396315] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.396324] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.396331] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.396339] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:54.063 [2024-04-19 03:30:31.396348] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:54.063 [2024-04-19 03:30:31.396355] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.396361] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a4d00): datao=0, datal=512, cccid=6 00:16:54.063 [2024-04-19 03:30:31.396368] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x604700) on tqpair(0x5a4d00): expected_datao=0, payload_size=512 00:16:54.063 [2024-04-19 03:30:31.396375] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.400409] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.400421] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.400429] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:54.063 [2024-04-19 03:30:31.400438] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:54.063 [2024-04-19 03:30:31.400444] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.400450] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a4d00): datao=0, datal=4096, cccid=7 00:16:54.063 [2024-04-19 03:30:31.400458] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x604860) on tqpair(0x5a4d00): expected_datao=0, payload_size=4096 00:16:54.063 [2024-04-19 03:30:31.400465] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.400474] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.400481] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.400493] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.063 [2024-04-19 03:30:31.400502] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.063 [2024-04-19 03:30:31.400508] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.400514] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6045a0) on tqpair=0x5a4d00 00:16:54.063 [2024-04-19 03:30:31.400534] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.063 [2024-04-19 03:30:31.400546] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.063 [2024-04-19 03:30:31.400552] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.400559] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x604440) on tqpair=0x5a4d00 00:16:54.063 [2024-04-19 03:30:31.400572] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.063 [2024-04-19 03:30:31.400583] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.063 [2024-04-19 03:30:31.400589] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.400599] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x604700) on tqpair=0x5a4d00 00:16:54.063 [2024-04-19 03:30:31.400610] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.063 [2024-04-19 03:30:31.400619] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.063 [2024-04-19 03:30:31.400626] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.063 [2024-04-19 03:30:31.400632] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x604860) on tqpair=0x5a4d00 00:16:54.063 ===================================================== 00:16:54.063 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:54.063 ===================================================== 00:16:54.063 Controller Capabilities/Features 00:16:54.063 ================================ 00:16:54.063 Vendor ID: 8086 00:16:54.063 Subsystem Vendor ID: 8086 00:16:54.063 Serial Number: SPDK00000000000001 00:16:54.063 Model Number: SPDK bdev Controller 00:16:54.063 Firmware Version: 24.05 00:16:54.063 Recommended Arb Burst: 6 00:16:54.063 IEEE OUI Identifier: e4 d2 5c 00:16:54.063 Multi-path I/O 00:16:54.063 May have multiple subsystem ports: Yes 00:16:54.063 May have multiple controllers: Yes 00:16:54.063 Associated with SR-IOV VF: No 00:16:54.063 Max Data Transfer Size: 131072 00:16:54.063 Max Number of Namespaces: 32 00:16:54.063 Max Number of I/O Queues: 127 00:16:54.063 NVMe Specification Version (VS): 1.3 00:16:54.063 NVMe Specification Version (Identify): 1.3 00:16:54.063 Maximum Queue Entries: 128 00:16:54.063 Contiguous Queues Required: Yes 00:16:54.063 Arbitration Mechanisms Supported 00:16:54.063 Weighted Round Robin: Not Supported 00:16:54.063 Vendor Specific: Not Supported 00:16:54.063 Reset Timeout: 15000 ms 00:16:54.063 Doorbell Stride: 4 bytes 00:16:54.063 
NVM Subsystem Reset: Not Supported
00:16:54.063 Command Sets Supported
00:16:54.063 NVM Command Set: Supported
00:16:54.063 Boot Partition: Not Supported
00:16:54.063 Memory Page Size Minimum: 4096 bytes
00:16:54.063 Memory Page Size Maximum: 4096 bytes
00:16:54.063 Persistent Memory Region: Not Supported
00:16:54.063 Optional Asynchronous Events Supported
00:16:54.063 Namespace Attribute Notices: Supported
00:16:54.063 Firmware Activation Notices: Not Supported
00:16:54.063 ANA Change Notices: Not Supported
00:16:54.063 PLE Aggregate Log Change Notices: Not Supported
00:16:54.063 LBA Status Info Alert Notices: Not Supported
00:16:54.063 EGE Aggregate Log Change Notices: Not Supported
00:16:54.063 Normal NVM Subsystem Shutdown event: Not Supported
00:16:54.063 Zone Descriptor Change Notices: Not Supported
00:16:54.063 Discovery Log Change Notices: Not Supported
00:16:54.063 Controller Attributes
00:16:54.063 128-bit Host Identifier: Supported
00:16:54.063 Non-Operational Permissive Mode: Not Supported
00:16:54.063 NVM Sets: Not Supported
00:16:54.063 Read Recovery Levels: Not Supported
00:16:54.063 Endurance Groups: Not Supported
00:16:54.063 Predictable Latency Mode: Not Supported
00:16:54.063 Traffic Based Keep ALive: Not Supported
00:16:54.063 Namespace Granularity: Not Supported
00:16:54.063 SQ Associations: Not Supported
00:16:54.063 UUID List: Not Supported
00:16:54.063 Multi-Domain Subsystem: Not Supported
00:16:54.063 Fixed Capacity Management: Not Supported
00:16:54.063 Variable Capacity Management: Not Supported
00:16:54.063 Delete Endurance Group: Not Supported
00:16:54.063 Delete NVM Set: Not Supported
00:16:54.063 Extended LBA Formats Supported: Not Supported
00:16:54.063 Flexible Data Placement Supported: Not Supported
00:16:54.063
00:16:54.063 Controller Memory Buffer Support
00:16:54.063 ================================
00:16:54.063 Supported: No
00:16:54.063
00:16:54.063 Persistent Memory Region Support
00:16:54.063 ================================
00:16:54.063 Supported: No
00:16:54.063
00:16:54.063 Admin Command Set Attributes
00:16:54.063 ============================
00:16:54.063 Security Send/Receive: Not Supported
00:16:54.063 Format NVM: Not Supported
00:16:54.063 Firmware Activate/Download: Not Supported
00:16:54.063 Namespace Management: Not Supported
00:16:54.063 Device Self-Test: Not Supported
00:16:54.063 Directives: Not Supported
00:16:54.063 NVMe-MI: Not Supported
00:16:54.063 Virtualization Management: Not Supported
00:16:54.063 Doorbell Buffer Config: Not Supported
00:16:54.063 Get LBA Status Capability: Not Supported
00:16:54.063 Command & Feature Lockdown Capability: Not Supported
00:16:54.063 Abort Command Limit: 4
00:16:54.063 Async Event Request Limit: 4
00:16:54.063 Number of Firmware Slots: N/A
00:16:54.063 Firmware Slot 1 Read-Only: N/A
00:16:54.063 Firmware Activation Without Reset: N/A
00:16:54.063 Multiple Update Detection Support: N/A
00:16:54.063 Firmware Update Granularity: No Information Provided
00:16:54.063 Per-Namespace SMART Log: No
00:16:54.063 Asymmetric Namespace Access Log Page: Not Supported
00:16:54.064 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:16:54.064 Command Effects Log Page: Supported
00:16:54.064 Get Log Page Extended Data: Supported
00:16:54.064 Telemetry Log Pages: Not Supported
00:16:54.064 Persistent Event Log Pages: Not Supported
00:16:54.064 Supported Log Pages Log Page: May Support
00:16:54.064 Commands Supported & Effects Log Page: Not Supported
00:16:54.064 Feature Identifiers & Effects Log Page:May Support
00:16:54.064 NVMe-MI Commands & Effects Log Page: May Support
00:16:54.064 Data Area 4 for Telemetry Log: Not Supported
00:16:54.064 Error Log Page Entries Supported: 128
00:16:54.064 Keep Alive: Supported
00:16:54.064 Keep Alive Granularity: 10000 ms
00:16:54.064
00:16:54.064 NVM Command Set Attributes
00:16:54.064 ==========================
00:16:54.064 Submission Queue Entry Size
00:16:54.064 Max: 64
00:16:54.064 Min: 64
00:16:54.064 Completion Queue Entry Size
00:16:54.064 Max: 16
00:16:54.064 Min: 16
00:16:54.064 Number of Namespaces: 32
00:16:54.064 Compare Command: Supported
00:16:54.064 Write Uncorrectable Command: Not Supported
00:16:54.064 Dataset Management Command: Supported
00:16:54.064 Write Zeroes Command: Supported
00:16:54.064 Set Features Save Field: Not Supported
00:16:54.064 Reservations: Supported
00:16:54.064 Timestamp: Not Supported
00:16:54.064 Copy: Supported
00:16:54.064 Volatile Write Cache: Present
00:16:54.064 Atomic Write Unit (Normal): 1
00:16:54.064 Atomic Write Unit (PFail): 1
00:16:54.064 Atomic Compare & Write Unit: 1
00:16:54.064 Fused Compare & Write: Supported
00:16:54.064 Scatter-Gather List
00:16:54.064 SGL Command Set: Supported
00:16:54.064 SGL Keyed: Supported
00:16:54.064 SGL Bit Bucket Descriptor: Not Supported
00:16:54.064 SGL Metadata Pointer: Not Supported
00:16:54.064 Oversized SGL: Not Supported
00:16:54.064 SGL Metadata Address: Not Supported
00:16:54.064 SGL Offset: Supported
00:16:54.064 Transport SGL Data Block: Not Supported
00:16:54.064 Replay Protected Memory Block: Not Supported
00:16:54.064
00:16:54.064 Firmware Slot Information
00:16:54.064 =========================
00:16:54.064 Active slot: 1
00:16:54.064 Slot 1 Firmware Revision: 24.05
00:16:54.064
00:16:54.064
00:16:54.064 Commands Supported and Effects
00:16:54.064 ==============================
00:16:54.064 Admin Commands
00:16:54.064 --------------
00:16:54.064 Get Log Page (02h): Supported
00:16:54.064 Identify (06h): Supported
00:16:54.064 Abort (08h): Supported
00:16:54.064 Set Features (09h): Supported
00:16:54.064 Get Features (0Ah): Supported
00:16:54.064 Asynchronous Event Request (0Ch): Supported
00:16:54.064 Keep Alive (18h): Supported
00:16:54.064 I/O Commands
00:16:54.064 ------------
00:16:54.064 Flush (00h): Supported LBA-Change
00:16:54.064 Write (01h): Supported LBA-Change
00:16:54.064 Read (02h): Supported
00:16:54.064 Compare (05h): Supported
00:16:54.064 Write Zeroes (08h): Supported LBA-Change
00:16:54.064 Dataset Management (09h): Supported LBA-Change
00:16:54.064 Copy (19h): Supported LBA-Change
00:16:54.064 Unknown (79h): Supported LBA-Change
00:16:54.064 Unknown (7Ah): Supported
00:16:54.064
00:16:54.064 Error Log
00:16:54.064 =========
00:16:54.064
00:16:54.064 Arbitration
00:16:54.064 ===========
00:16:54.064 Arbitration Burst: 1
00:16:54.064
00:16:54.064 Power Management
00:16:54.064 ================
00:16:54.064 Number of Power States: 1
00:16:54.064 Current Power State: Power State #0
00:16:54.064 Power State #0:
00:16:54.064 Max Power: 0.00 W
00:16:54.064 Non-Operational State: Operational
00:16:54.064 Entry Latency: Not Reported
00:16:54.064 Exit Latency: Not Reported
00:16:54.064 Relative Read Throughput: 0
00:16:54.064 Relative Read Latency: 0
00:16:54.064 Relative Write Throughput: 0
00:16:54.064 Relative Write Latency: 0
00:16:54.064 Idle Power: Not Reported
00:16:54.064 Active Power: Not Reported
00:16:54.064 Non-Operational Permissive Mode: Not Supported
00:16:54.064
00:16:54.064 Health Information
00:16:54.064 ==================
00:16:54.064 Critical Warnings: 00:16:54.064 Available Spare Space: OK 00:16:54.064 Temperature: OK 00:16:54.064 Device Reliability: OK 00:16:54.064 Read Only: No 00:16:54.064 Volatile Memory Backup: OK 00:16:54.064 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:54.064 Temperature Threshold: [2024-04-19 03:30:31.400787] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.400799] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5a4d00) 00:16:54.064 [2024-04-19 03:30:31.400810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.064 [2024-04-19 03:30:31.400833] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x604860, cid 7, qid 0 00:16:54.064 [2024-04-19 03:30:31.401024] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.064 [2024-04-19 03:30:31.401036] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.064 [2024-04-19 03:30:31.401043] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.401050] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x604860) on tqpair=0x5a4d00 00:16:54.064 [2024-04-19 03:30:31.401091] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:16:54.064 [2024-04-19 03:30:31.401111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.064 [2024-04-19 03:30:31.401123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.064 [2024-04-19 03:30:31.401133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.064 [2024-04-19 03:30:31.401159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.064 [2024-04-19 03:30:31.401171] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.401179] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.401185] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.064 [2024-04-19 03:30:31.401195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.064 [2024-04-19 03:30:31.401217] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.064 [2024-04-19 03:30:31.401407] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.064 [2024-04-19 03:30:31.401421] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.064 [2024-04-19 03:30:31.401428] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.401434] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00 00:16:54.064 [2024-04-19 03:30:31.401446] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.401453] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.401460] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.064 [2024-04-19 03:30:31.401471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.064 [2024-04-19 03:30:31.401497] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.064 [2024-04-19 03:30:31.401642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.064 [2024-04-19 03:30:31.401654] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.064 [2024-04-19 03:30:31.401661] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.401671] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00 00:16:54.064 [2024-04-19 03:30:31.401680] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:16:54.064 [2024-04-19 03:30:31.401688] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:16:54.064 [2024-04-19 03:30:31.401703] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.401712] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.401719] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.064 [2024-04-19 03:30:31.401729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.064 [2024-04-19 03:30:31.401750] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.064 [2024-04-19 03:30:31.401904] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.064 [2024-04-19 03:30:31.401920] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.064 [2024-04-19 03:30:31.401927] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.401933] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00 00:16:54.064 [2024-04-19 03:30:31.401951] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.401960] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.401967] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.064 [2024-04-19 03:30:31.401978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.064 [2024-04-19 03:30:31.401998] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.064 [2024-04-19 03:30:31.402126] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.064 [2024-04-19 03:30:31.402141] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.064 [2024-04-19 03:30:31.402147] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.402154] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00 00:16:54.064 [2024-04-19 03:30:31.402171] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.064 [2024-04-19 03:30:31.402180] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.402187] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.065 [2024-04-19 03:30:31.402197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.065 [2024-04-19 03:30:31.402218] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.065 [2024-04-19 03:30:31.402339] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.065 [2024-04-19 03:30:31.402351] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.065 [2024-04-19 03:30:31.402358] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.402364] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00 00:16:54.065 [2024-04-19 03:30:31.402386] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.402397] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.402404] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.065 [2024-04-19 03:30:31.402415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.065 [2024-04-19 03:30:31.402436] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.065 [2024-04-19 03:30:31.402610] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.065 [2024-04-19 03:30:31.402629] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.065 [2024-04-19 03:30:31.402636] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.402643] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00 00:16:54.065 [2024-04-19 03:30:31.402660] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.402669] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.402676] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.065 [2024-04-19 03:30:31.402686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.065 [2024-04-19 03:30:31.402723] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.065 [2024-04-19 03:30:31.402852] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.065 [2024-04-19 03:30:31.402867] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.065 [2024-04-19 03:30:31.402874] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.402881] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00 00:16:54.065 [2024-04-19 03:30:31.402898] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.402907] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.402914] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.065 
[2024-04-19 03:30:31.402925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.065 [2024-04-19 03:30:31.402945] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.065 [2024-04-19 03:30:31.403099] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.065 [2024-04-19 03:30:31.403114] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.065 [2024-04-19 03:30:31.403120] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.403127] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00 00:16:54.065 [2024-04-19 03:30:31.403143] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.403153] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.403159] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.065 [2024-04-19 03:30:31.403170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.065 [2024-04-19 03:30:31.403190] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.065 [2024-04-19 03:30:31.403365] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.065 [2024-04-19 03:30:31.403379] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.065 [2024-04-19 03:30:31.403393] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.403400] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00 00:16:54.065 [2024-04-19 03:30:31.403417] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.403427] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.403434] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.065 [2024-04-19 03:30:31.403444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.065 [2024-04-19 03:30:31.403465] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.065 [2024-04-19 03:30:31.403635] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.065 [2024-04-19 03:30:31.403647] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.065 [2024-04-19 03:30:31.403657] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.403665] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00 00:16:54.065 [2024-04-19 03:30:31.403682] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.403691] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.403698] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.065 [2024-04-19 03:30:31.403708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.065 [2024-04-19 03:30:31.403728] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.065 [2024-04-19 03:30:31.403864] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.065 [2024-04-19 03:30:31.403879] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.065 [2024-04-19 03:30:31.403885] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.403892] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00 00:16:54.065 [2024-04-19 03:30:31.403909] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.403918] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.403925] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.065 [2024-04-19 03:30:31.403935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.065 [2024-04-19 03:30:31.403955] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.065 [2024-04-19 03:30:31.404099] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.065 [2024-04-19 03:30:31.404111] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.065 [2024-04-19 03:30:31.404118] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.404125] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00 00:16:54.065 [2024-04-19 03:30:31.404141] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.404150] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.404157] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.065 [2024-04-19 03:30:31.404167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.065 [2024-04-19 03:30:31.404187] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.065 [2024-04-19 03:30:31.404321] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.065 [2024-04-19 03:30:31.404336] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:54.065 [2024-04-19 03:30:31.404343] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.404349] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00 00:16:54.065 [2024-04-19 03:30:31.404366] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.404375] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:54.065 [2024-04-19 03:30:31.408393] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a4d00) 00:16:54.065 [2024-04-19 03:30:31.408419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.066 [2024-04-19 03:30:31.408454] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6042e0, cid 3, qid 0 00:16:54.066 [2024-04-19 03:30:31.408595] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:54.066 
[2024-04-19 03:30:31.408611] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:54.066 [2024-04-19 03:30:31.408618] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:54.066 [2024-04-19 03:30:31.408629] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6042e0) on tqpair=0x5a4d00
00:16:54.066 [2024-04-19 03:30:31.408643] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:16:54.066 0 Kelvin (-273 Celsius)
00:16:54.066 Available Spare: 0%
00:16:54.066 Available Spare Threshold: 0%
00:16:54.066 Life Percentage Used: 0%
00:16:54.066 Data Units Read: 0
00:16:54.066 Data Units Written: 0
00:16:54.066 Host Read Commands: 0
00:16:54.066 Host Write Commands: 0
00:16:54.066 Controller Busy Time: 0 minutes
00:16:54.066 Power Cycles: 0
00:16:54.066 Power On Hours: 0 hours
00:16:54.066 Unsafe Shutdowns: 0
00:16:54.066 Unrecoverable Media Errors: 0
00:16:54.066 Lifetime Error Log Entries: 0
00:16:54.066 Warning Temperature Time: 0 minutes
00:16:54.066 Critical Temperature Time: 0 minutes
00:16:54.066
00:16:54.066 Number of Queues
00:16:54.066 ================
00:16:54.066 Number of I/O Submission Queues: 127
00:16:54.066 Number of I/O Completion Queues: 127
00:16:54.066
00:16:54.066 Active Namespaces
00:16:54.066 =================
00:16:54.066 Namespace ID:1
00:16:54.066 Error Recovery Timeout: Unlimited
00:16:54.066 Command Set Identifier: NVM (00h)
00:16:54.066 Deallocate: Supported
00:16:54.066 Deallocated/Unwritten Error: Not Supported
00:16:54.066 Deallocated Read Value: Unknown
00:16:54.066 Deallocate in Write Zeroes: Not Supported
00:16:54.066 Deallocated Guard Field: 0xFFFF
00:16:54.066 Flush: Supported
00:16:54.066 Reservation: Supported
00:16:54.066 Namespace Sharing Capabilities: Multiple Controllers
00:16:54.066 Size (in LBAs): 131072 (0GiB)
00:16:54.066 Capacity (in LBAs): 131072 (0GiB)
00:16:54.066 Utilization (in LBAs): 131072 (0GiB)
00:16:54.066 NGUID: ABCDEF0123456789ABCDEF0123456789
00:16:54.066 EUI64: ABCDEF0123456789
00:16:54.066 UUID: 7abe3157-c9b9-45a8-b2df-1e25e6d32634
00:16:54.066 Thin Provisioning: Not Supported
00:16:54.066 Per-NS Atomic Units: Yes
00:16:54.066 Atomic Boundary Size (Normal): 0
00:16:54.066 Atomic Boundary Size (PFail): 0
00:16:54.066 Atomic Boundary Offset: 0
00:16:54.066 Maximum Single Source Range Length: 65535
00:16:54.066 Maximum Copy Length: 65535
00:16:54.066 Maximum Source Range Count: 1
00:16:54.066 NGUID/EUI64 Never Reused: No
00:16:54.066 Namespace Write Protected: No
00:16:54.066 Number of LBA Formats: 1
00:16:54.066 Current LBA Format: LBA Format #00
00:16:54.066 LBA Format #00: Data Size: 512 Metadata Size: 0
00:16:54.066
00:16:54.066 03:30:31 -- host/identify.sh@51 -- # sync
00:16:54.066 03:30:31 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:54.066 03:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:54.066 03:30:31 -- common/autotest_common.sh@10 -- # set +x
00:16:54.066 03:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:54.066 03:30:31 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:16:54.066 03:30:31 -- host/identify.sh@56 -- # nvmftestfini
00:16:54.066 03:30:31 -- nvmf/common.sh@477 -- # nvmfcleanup
00:16:54.066 03:30:31 -- nvmf/common.sh@117 -- # sync
00:16:54.066 03:30:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:54.066 03:30:31 -- nvmf/common.sh@120 -- # set +e
00:16:54.066 03:30:31 -- nvmf/common.sh@121 -- #
for i in {1..20} 00:16:54.066 03:30:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:54.066 rmmod nvme_tcp 00:16:54.066 rmmod nvme_fabrics 00:16:54.066 rmmod nvme_keyring 00:16:54.066 03:30:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:54.066 03:30:31 -- nvmf/common.sh@124 -- # set -e 00:16:54.066 03:30:31 -- nvmf/common.sh@125 -- # return 0 00:16:54.066 03:30:31 -- nvmf/common.sh@478 -- # '[' -n 283978 ']' 00:16:54.066 03:30:31 -- nvmf/common.sh@479 -- # killprocess 283978 00:16:54.066 03:30:31 -- common/autotest_common.sh@936 -- # '[' -z 283978 ']' 00:16:54.066 03:30:31 -- common/autotest_common.sh@940 -- # kill -0 283978 00:16:54.066 03:30:31 -- common/autotest_common.sh@941 -- # uname 00:16:54.066 03:30:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:54.066 03:30:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 283978 00:16:54.066 03:30:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:54.066 03:30:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:54.066 03:30:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 283978' 00:16:54.066 killing process with pid 283978 00:16:54.066 03:30:31 -- common/autotest_common.sh@955 -- # kill 283978 00:16:54.066 [2024-04-19 03:30:31.516712] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:54.066 03:30:31 -- common/autotest_common.sh@960 -- # wait 283978 00:16:54.325 03:30:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:54.325 03:30:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:54.325 03:30:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:54.326 03:30:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.326 03:30:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:54.326 03:30:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.326 03:30:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.326 03:30:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.859 03:30:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.859 00:16:56.859 real 0m5.505s 00:16:56.859 user 0m4.452s 00:16:56.859 sys 0m1.907s 00:16:56.859 03:30:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:56.859 03:30:33 -- common/autotest_common.sh@10 -- # set +x 00:16:56.859 ************************************ 00:16:56.859 END TEST nvmf_identify 00:16:56.859 ************************************ 00:16:56.859 03:30:33 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:56.859 03:30:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:56.859 03:30:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.859 03:30:33 -- common/autotest_common.sh@10 -- # set +x 00:16:56.859 ************************************ 00:16:56.859 START TEST nvmf_perf 00:16:56.859 ************************************ 00:16:56.859 03:30:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:56.859 * Looking for test storage... 
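A minimal manual repro of the identify pass that just ended, for anyone rerunning it outside the harness: its core step is SPDK's bundled identify example pointed at the TCP listener, as in the sketch below. The binary path assumes a default SPDK build layout and is not taken from this log; the transport string values match the listener shown above.

  # assumed paths; reproduces host/identify.sh's core step by hand against a still-running target
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
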
00:16:56.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:56.859 03:30:34 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.859 03:30:34 -- nvmf/common.sh@7 -- # uname -s 00:16:56.859 03:30:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.859 03:30:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.859 03:30:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.859 03:30:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.859 03:30:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.859 03:30:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.859 03:30:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.859 03:30:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.859 03:30:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.859 03:30:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.859 03:30:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.859 03:30:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.859 03:30:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.860 03:30:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.860 03:30:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.860 03:30:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.860 03:30:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.860 03:30:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.860 03:30:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.860 03:30:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.860 03:30:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.860 03:30:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.860 03:30:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.860 03:30:34 -- paths/export.sh@5 -- # export PATH 00:16:56.860 03:30:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.860 03:30:34 -- nvmf/common.sh@47 -- # : 0 00:16:56.860 03:30:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.860 03:30:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.860 03:30:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.860 03:30:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.860 03:30:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.860 03:30:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.860 03:30:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.860 03:30:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.860 03:30:34 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:56.860 03:30:34 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:56.860 03:30:34 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:56.860 03:30:34 -- host/perf.sh@17 -- # nvmftestinit 00:16:56.860 03:30:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:56.860 03:30:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.860 03:30:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:56.860 03:30:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:56.860 03:30:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:56.860 03:30:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.860 03:30:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.860 03:30:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.860 03:30:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:56.860 03:30:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:56.860 03:30:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.860 03:30:34 -- common/autotest_common.sh@10 -- # set +x 00:16:58.764 03:30:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:58.764 03:30:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:58.764 03:30:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:58.764 03:30:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:58.764 03:30:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:58.764 03:30:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:58.764 03:30:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:58.764 03:30:36 -- nvmf/common.sh@295 -- # net_devs=() 
00:16:58.764 03:30:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:58.764 03:30:36 -- nvmf/common.sh@296 -- # e810=() 00:16:58.764 03:30:36 -- nvmf/common.sh@296 -- # local -ga e810 00:16:58.764 03:30:36 -- nvmf/common.sh@297 -- # x722=() 00:16:58.764 03:30:36 -- nvmf/common.sh@297 -- # local -ga x722 00:16:58.764 03:30:36 -- nvmf/common.sh@298 -- # mlx=() 00:16:58.764 03:30:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:58.764 03:30:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.764 03:30:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.764 03:30:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.764 03:30:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.764 03:30:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.764 03:30:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.764 03:30:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.764 03:30:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.764 03:30:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.764 03:30:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.764 03:30:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.764 03:30:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:58.764 03:30:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:58.764 03:30:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:58.764 03:30:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.764 03:30:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:58.764 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:58.764 03:30:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.764 03:30:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:58.764 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:58.764 03:30:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:58.764 03:30:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.764 03:30:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.764 03:30:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:58.764 03:30:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:16:58.764 03:30:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:58.764 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:58.764 03:30:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.764 03:30:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.764 03:30:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.764 03:30:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:58.764 03:30:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.764 03:30:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:58.764 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:58.764 03:30:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.764 03:30:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:58.764 03:30:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:58.764 03:30:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:58.764 03:30:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.764 03:30:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.764 03:30:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.764 03:30:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:58.764 03:30:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.764 03:30:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.764 03:30:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:58.764 03:30:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.764 03:30:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.764 03:30:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:58.764 03:30:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:58.764 03:30:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.764 03:30:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.764 03:30:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.764 03:30:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.764 03:30:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:58.764 03:30:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.764 03:30:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.764 03:30:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.764 03:30:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:58.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:16:58.764 00:16:58.764 --- 10.0.0.2 ping statistics --- 00:16:58.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.764 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:16:58.764 03:30:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:58.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:16:58.764 00:16:58.764 --- 10.0.0.1 ping statistics --- 00:16:58.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.764 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:16:58.764 03:30:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.764 03:30:36 -- nvmf/common.sh@411 -- # return 0 00:16:58.764 03:30:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:58.764 03:30:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.764 03:30:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:58.764 03:30:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.764 03:30:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:58.764 03:30:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:58.764 03:30:36 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:58.764 03:30:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:58.764 03:30:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:58.764 03:30:36 -- common/autotest_common.sh@10 -- # set +x 00:16:58.764 03:30:36 -- nvmf/common.sh@470 -- # nvmfpid=286068 00:16:58.764 03:30:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:58.764 03:30:36 -- nvmf/common.sh@471 -- # waitforlisten 286068 00:16:58.764 03:30:36 -- common/autotest_common.sh@817 -- # '[' -z 286068 ']' 00:16:58.764 03:30:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.764 03:30:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:58.764 03:30:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.764 03:30:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:58.765 03:30:36 -- common/autotest_common.sh@10 -- # set +x 00:16:58.765 [2024-04-19 03:30:36.288316] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:16:58.765 [2024-04-19 03:30:36.288414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.765 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.022 [2024-04-19 03:30:36.358000] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:59.022 [2024-04-19 03:30:36.473871] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.022 [2024-04-19 03:30:36.473938] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.022 [2024-04-19 03:30:36.473965] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.023 [2024-04-19 03:30:36.473981] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.023 [2024-04-19 03:30:36.473993] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
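Before the target launch above, nvmf_tcp_init stitched the two ports into a back-to-back pair: cvl_0_0 moves into a private namespace as the target side, cvl_0_1 stays in the root namespace as the initiator, and one ping in each direction proves the path before anything NVMe-related starts. The traced commands replayed in order (a sketch; run as root):

  # Target side lives in its own namespace so NVMe/TCP traffic crosses the wire.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
  ping -c 1 10.0.0.2                                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator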
00:16:59.023 [2024-04-19 03:30:36.474095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.023 [2024-04-19 03:30:36.474173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.023 [2024-04-19 03:30:36.474259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.023 [2024-04-19 03:30:36.474262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.955 03:30:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:59.955 03:30:37 -- common/autotest_common.sh@850 -- # return 0 00:16:59.955 03:30:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:59.955 03:30:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:59.955 03:30:37 -- common/autotest_common.sh@10 -- # set +x 00:16:59.955 03:30:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.955 03:30:37 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:16:59.955 03:30:37 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:03.235 03:30:40 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:03.235 03:30:40 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:03.235 03:30:40 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:17:03.235 03:30:40 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:03.493 03:30:40 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:03.493 03:30:40 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:17:03.493 03:30:40 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:03.493 03:30:40 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:03.493 03:30:40 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:03.750 [2024-04-19 03:30:41.074314] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.750 03:30:41 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:04.008 03:30:41 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:04.008 03:30:41 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:04.266 03:30:41 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:04.266 03:30:41 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:04.266 03:30:41 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.523 [2024-04-19 03:30:42.045880] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.523 03:30:42 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:04.781 03:30:42 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:17:04.781 03:30:42 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:17:04.781 03:30:42 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
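With the target running, the whole host/perf.sh provisioning traced above reduces to a handful of rpc.py calls over the default /var/tmp/spdk.sock socket. A condensed replay of the traced sequence (path shortened to $rpc for readability; flags exactly as traced):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                    # transport opts as traced ('-t tcp -o')
  $rpc bdev_malloc_create 64 512                          # 64 MiB ram disk, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # local NVMe at 0000:88:00.0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420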
00:17:04.782 03:30:42 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:17:06.158 Initializing NVMe Controllers 00:17:06.158 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:17:06.158 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:17:06.158 Initialization complete. Launching workers. 00:17:06.158 ======================================================== 00:17:06.158 Latency(us) 00:17:06.158 Device Information : IOPS MiB/s Average min max 00:17:06.158 PCIE (0000:88:00.0) NSID 1 from core 0: 85261.22 333.05 374.85 10.44 4507.60 00:17:06.158 ======================================================== 00:17:06.158 Total : 85261.22 333.05 374.85 10.44 4507.60 00:17:06.158 00:17:06.158 03:30:43 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:06.158 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.532 Initializing NVMe Controllers 00:17:07.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:07.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:07.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:07.532 Initialization complete. Launching workers. 00:17:07.532 ======================================================== 00:17:07.532 Latency(us) 00:17:07.532 Device Information : IOPS MiB/s Average min max 00:17:07.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.00 0.32 12198.51 187.53 46754.46 00:17:07.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 65.00 0.25 15453.36 4962.82 55846.12 00:17:07.532 ======================================================== 00:17:07.532 Total : 148.00 0.58 13628.00 187.53 55846.12 00:17:07.532 00:17:07.532 03:30:44 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:07.532 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.908 Initializing NVMe Controllers 00:17:08.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:08.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:08.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:08.909 Initialization complete. Launching workers. 
00:17:08.909 ======================================================== 00:17:08.909 Latency(us) 00:17:08.909 Device Information : IOPS MiB/s Average min max 00:17:08.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8362.44 32.67 3830.22 413.76 7587.32 00:17:08.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3859.05 15.07 8323.31 3925.62 16071.25 00:17:08.909 ======================================================== 00:17:08.909 Total : 12221.49 47.74 5248.95 413.76 16071.25 00:17:08.909 00:17:08.909 03:30:46 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:17:08.909 03:30:46 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:17:08.909 03:30:46 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:08.909 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.509 Initializing NVMe Controllers 00:17:11.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:11.509 Controller IO queue size 128, less than required. 00:17:11.509 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:11.509 Controller IO queue size 128, less than required. 00:17:11.509 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:11.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:11.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:11.509 Initialization complete. Launching workers. 00:17:11.509 ======================================================== 00:17:11.509 Latency(us) 00:17:11.509 Device Information : IOPS MiB/s Average min max 00:17:11.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 945.56 236.39 142475.36 73885.61 199028.73 00:17:11.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 577.81 144.45 224514.69 112345.73 341251.63 00:17:11.509 ======================================================== 00:17:11.509 Total : 1523.37 380.84 173592.80 73885.61 341251.63 00:17:11.509 00:17:11.509 03:30:48 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:11.509 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.768 No valid NVMe controllers or AIO or URING devices found 00:17:11.768 Initializing NVMe Controllers 00:17:11.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:11.768 Controller IO queue size 128, less than required. 00:17:11.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:11.768 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:11.768 Controller IO queue size 128, less than required. 00:17:11.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:11.768 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:17:11.768 WARNING: Some requested NVMe devices were skipped 00:17:11.768 03:30:49 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:11.768 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.049 Initializing NVMe Controllers 00:17:15.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:15.049 Controller IO queue size 128, less than required. 00:17:15.049 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:15.049 Controller IO queue size 128, less than required. 00:17:15.049 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:15.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:15.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:15.049 Initialization complete. Launching workers. 00:17:15.049 00:17:15.049 ==================== 00:17:15.049 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:15.049 TCP transport: 00:17:15.049 polls: 25959 00:17:15.049 idle_polls: 8506 00:17:15.049 sock_completions: 17453 00:17:15.049 nvme_completions: 4531 00:17:15.049 submitted_requests: 6748 00:17:15.049 queued_requests: 1 00:17:15.049 00:17:15.049 ==================== 00:17:15.049 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:15.049 TCP transport: 00:17:15.049 polls: 30818 00:17:15.049 idle_polls: 11844 00:17:15.049 sock_completions: 18974 00:17:15.049 nvme_completions: 3705 00:17:15.049 submitted_requests: 5600 00:17:15.049 queued_requests: 1 00:17:15.049 ======================================================== 00:17:15.049 Latency(us) 00:17:15.049 Device Information : IOPS MiB/s Average min max 00:17:15.049 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1130.70 282.68 116799.41 55950.52 179078.89 00:17:15.049 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 924.53 231.13 143273.99 58059.58 199410.02 00:17:15.049 ======================================================== 00:17:15.049 Total : 2055.23 513.81 128708.79 55950.52 199410.02 00:17:15.049 00:17:15.049 03:30:51 -- host/perf.sh@66 -- # sync 00:17:15.049 03:30:51 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.049 03:30:52 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:15.049 03:30:52 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:15.049 03:30:52 -- host/perf.sh@114 -- # nvmftestfini 00:17:15.049 03:30:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:15.049 03:30:52 -- nvmf/common.sh@117 -- # sync 00:17:15.049 03:30:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:15.049 03:30:52 -- nvmf/common.sh@120 -- # set +e 00:17:15.049 03:30:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:15.049 03:30:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:15.049 rmmod nvme_tcp 00:17:15.049 rmmod nvme_fabrics 00:17:15.049 rmmod nvme_keyring 00:17:15.049 03:30:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:15.049 03:30:52 -- nvmf/common.sh@124 -- # set -e 00:17:15.049 03:30:52 -- nvmf/common.sh@125 -- # return 0 00:17:15.049 03:30:52 -- 
nvmf/common.sh@478 -- # '[' -n 286068 ']' 00:17:15.049 03:30:52 -- nvmf/common.sh@479 -- # killprocess 286068 00:17:15.049 03:30:52 -- common/autotest_common.sh@936 -- # '[' -z 286068 ']' 00:17:15.049 03:30:52 -- common/autotest_common.sh@940 -- # kill -0 286068 00:17:15.049 03:30:52 -- common/autotest_common.sh@941 -- # uname 00:17:15.049 03:30:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:15.049 03:30:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 286068 00:17:15.049 03:30:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:15.049 03:30:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:15.049 03:30:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 286068' 00:17:15.049 killing process with pid 286068 00:17:15.049 03:30:52 -- common/autotest_common.sh@955 -- # kill 286068 00:17:15.049 03:30:52 -- common/autotest_common.sh@960 -- # wait 286068 00:17:16.423 03:30:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:16.423 03:30:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:16.423 03:30:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:16.423 03:30:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.423 03:30:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:16.423 03:30:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.423 03:30:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.423 03:30:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.959 03:30:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:18.959 00:17:18.959 real 0m21.924s 00:17:18.959 user 1m8.564s 00:17:18.959 sys 0m5.066s 00:17:18.959 03:30:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:18.959 03:30:55 -- common/autotest_common.sh@10 -- # set +x 00:17:18.959 ************************************ 00:17:18.959 END TEST nvmf_perf 00:17:18.959 ************************************ 00:17:18.959 03:30:55 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:18.959 03:30:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:18.959 03:30:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:18.959 03:30:55 -- common/autotest_common.sh@10 -- # set +x 00:17:18.959 ************************************ 00:17:18.959 START TEST nvmf_fio_host 00:17:18.959 ************************************ 00:17:18.959 03:30:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:18.959 * Looking for test storage... 
00:17:18.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:18.959 03:30:56 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.959 03:30:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.959 03:30:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.959 03:30:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.959 03:30:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.959 03:30:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.959 03:30:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.959 03:30:56 -- paths/export.sh@5 -- # export PATH 00:17:18.959 03:30:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.959 03:30:56 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.959 03:30:56 -- nvmf/common.sh@7 -- # uname -s 00:17:18.959 03:30:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.959 03:30:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.959 03:30:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.959 03:30:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.959 03:30:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.959 03:30:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.959 03:30:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.959 03:30:56 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.959 03:30:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.959 03:30:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.959 03:30:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:18.959 03:30:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:18.959 03:30:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.959 03:30:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.959 03:30:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.959 03:30:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.959 03:30:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.959 03:30:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.959 03:30:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.960 03:30:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.960 03:30:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.960 03:30:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.960 03:30:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.960 03:30:56 -- paths/export.sh@5 -- # export PATH 00:17:18.960 03:30:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.960 03:30:56 -- nvmf/common.sh@47 -- # : 0 00:17:18.960 03:30:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.960 03:30:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.960 03:30:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.960 03:30:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.960 03:30:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.960 03:30:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.960 03:30:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.960 03:30:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.960 03:30:56 -- host/fio.sh@12 -- # nvmftestinit 00:17:18.960 03:30:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:18.960 03:30:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.960 03:30:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:18.960 03:30:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:18.960 03:30:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:18.960 03:30:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.960 03:30:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.960 03:30:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.960 03:30:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:18.960 03:30:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:18.960 03:30:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:18.960 03:30:56 -- common/autotest_common.sh@10 -- # set +x 00:17:20.862 03:30:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:20.862 03:30:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:20.862 03:30:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:20.862 03:30:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:20.862 03:30:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:20.862 03:30:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:20.862 03:30:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:20.862 03:30:57 -- nvmf/common.sh@295 -- # net_devs=() 00:17:20.862 03:30:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:20.862 03:30:57 -- nvmf/common.sh@296 -- # e810=() 00:17:20.862 03:30:57 -- nvmf/common.sh@296 -- # local -ga e810 00:17:20.862 03:30:57 -- nvmf/common.sh@297 -- # x722=() 00:17:20.862 03:30:57 -- nvmf/common.sh@297 -- # local -ga x722 00:17:20.862 03:30:57 -- nvmf/common.sh@298 -- # mlx=() 00:17:20.862 03:30:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:20.862 03:30:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.862 03:30:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.862 03:30:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.862 03:30:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.862 03:30:57 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.862 03:30:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.862 03:30:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.862 03:30:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.862 03:30:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.862 03:30:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.862 03:30:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.862 03:30:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:20.862 03:30:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:20.862 03:30:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:20.862 03:30:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:20.862 03:30:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:20.862 03:30:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:20.862 03:30:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.862 03:30:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:20.862 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:20.862 03:30:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:20.862 03:30:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:20.862 03:30:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.862 03:30:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.862 03:30:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:20.862 03:30:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.862 03:30:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:20.862 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:20.862 03:30:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:20.862 03:30:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:20.862 03:30:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.862 03:30:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.862 03:30:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:20.862 03:30:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:20.862 03:30:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:20.862 03:30:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:20.862 03:30:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.862 03:30:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.862 03:30:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:20.862 03:30:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.862 03:30:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:20.862 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:20.862 03:30:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.862 03:30:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.862 03:30:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.862 03:30:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:20.862 03:30:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.862 03:30:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:20.862 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:20.862 03:30:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.862 03:30:58 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:20.862 03:30:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:20.862 03:30:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:20.862 03:30:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:20.862 03:30:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:20.862 03:30:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.862 03:30:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.862 03:30:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.862 03:30:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:20.862 03:30:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.862 03:30:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.862 03:30:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:20.862 03:30:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.862 03:30:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.862 03:30:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:20.862 03:30:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:20.862 03:30:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.862 03:30:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.862 03:30:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.862 03:30:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.862 03:30:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:20.862 03:30:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.862 03:30:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.862 03:30:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.862 03:30:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:20.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:17:20.862 00:17:20.862 --- 10.0.0.2 ping statistics --- 00:17:20.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.862 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:17:20.862 03:30:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:20.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:17:20.862 00:17:20.862 --- 10.0.0.1 ping statistics --- 00:17:20.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.863 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:17:20.863 03:30:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.863 03:30:58 -- nvmf/common.sh@411 -- # return 0 00:17:20.863 03:30:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:20.863 03:30:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.863 03:30:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:20.863 03:30:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:20.863 03:30:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.863 03:30:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:20.863 03:30:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:20.863 03:30:58 -- host/fio.sh@14 -- # [[ y != y ]] 00:17:20.863 03:30:58 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:17:20.863 03:30:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:20.863 03:30:58 -- common/autotest_common.sh@10 -- # set +x 00:17:20.863 03:30:58 -- host/fio.sh@22 -- # nvmfpid=290163 00:17:20.863 03:30:58 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:20.863 03:30:58 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:20.863 03:30:58 -- host/fio.sh@26 -- # waitforlisten 290163 00:17:20.863 03:30:58 -- common/autotest_common.sh@817 -- # '[' -z 290163 ']' 00:17:20.863 03:30:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.863 03:30:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:20.863 03:30:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.863 03:30:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:20.863 03:30:58 -- common/autotest_common.sh@10 -- # set +x 00:17:20.863 [2024-04-19 03:30:58.207119] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:17:20.863 [2024-04-19 03:30:58.207191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.863 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.863 [2024-04-19 03:30:58.269919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.863 [2024-04-19 03:30:58.376531] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.863 [2024-04-19 03:30:58.376584] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.863 [2024-04-19 03:30:58.376608] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.863 [2024-04-19 03:30:58.376619] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.863 [2024-04-19 03:30:58.376629] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
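As in the perf test, the target for this test is launched inside the namespace and the harness blocks until the RPC socket answers before configuring anything. A sketch of that launch-and-wait step; the polling loop below is an illustrative stand-in for the harness's own waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the target answers (stand-in for waitforlisten).
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  echo "nvmf_tgt ($nvmfpid) listening on /var/tmp/spdk.sock"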
00:17:20.863 [2024-04-19 03:30:58.376681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.863 [2024-04-19 03:30:58.376742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.863 [2024-04-19 03:30:58.376772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.863 [2024-04-19 03:30:58.376775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.121 03:30:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:21.121 03:30:58 -- common/autotest_common.sh@850 -- # return 0 00:17:21.121 03:30:58 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:21.121 03:30:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.121 03:30:58 -- common/autotest_common.sh@10 -- # set +x 00:17:21.121 [2024-04-19 03:30:58.511852] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.121 03:30:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.121 03:30:58 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:17:21.121 03:30:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:21.121 03:30:58 -- common/autotest_common.sh@10 -- # set +x 00:17:21.121 03:30:58 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:21.121 03:30:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.121 03:30:58 -- common/autotest_common.sh@10 -- # set +x 00:17:21.121 Malloc1 00:17:21.121 03:30:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.121 03:30:58 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:21.121 03:30:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.121 03:30:58 -- common/autotest_common.sh@10 -- # set +x 00:17:21.121 03:30:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.121 03:30:58 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:21.121 03:30:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.121 03:30:58 -- common/autotest_common.sh@10 -- # set +x 00:17:21.121 03:30:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.121 03:30:58 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.121 03:30:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.121 03:30:58 -- common/autotest_common.sh@10 -- # set +x 00:17:21.121 [2024-04-19 03:30:58.589069] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.121 03:30:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.121 03:30:58 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:21.121 03:30:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.121 03:30:58 -- common/autotest_common.sh@10 -- # set +x 00:17:21.121 03:30:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.121 03:30:58 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:17:21.121 03:30:58 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:21.121 03:30:58 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:21.121 03:30:58 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:21.121 03:30:58 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:21.121 03:30:58 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:21.121 03:30:58 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:21.121 03:30:58 -- common/autotest_common.sh@1327 -- # shift 00:17:21.121 03:30:58 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:21.121 03:30:58 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:21.121 03:30:58 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:21.121 03:30:58 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:21.121 03:30:58 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:21.121 03:30:58 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:21.121 03:30:58 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:21.121 03:30:58 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:21.121 03:30:58 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:21.121 03:30:58 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:17:21.122 03:30:58 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:21.122 03:30:58 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:21.122 03:30:58 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:21.122 03:30:58 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:21.122 03:30:58 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:21.379 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:21.379 fio-3.35 00:17:21.379 Starting 1 thread 00:17:21.379 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.905 00:17:23.905 test: (groupid=0, jobs=1): err= 0: pid=290264: Fri Apr 19 03:31:01 2024 00:17:23.905 read: IOPS=8899, BW=34.8MiB/s (36.5MB/s)(69.7MiB/2006msec) 00:17:23.905 slat (nsec): min=1961, max=277091, avg=2538.18, stdev=2552.87 00:17:23.905 clat (usec): min=2683, max=14186, avg=7954.24, stdev=650.72 00:17:23.905 lat (usec): min=2710, max=14188, avg=7956.78, stdev=650.57 00:17:23.905 clat percentiles (usec): 00:17:23.905 | 1.00th=[ 6587], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7504], 00:17:23.905 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8094], 00:17:23.905 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:17:23.905 | 99.00th=[ 9765], 99.50th=[10814], 99.90th=[12518], 99.95th=[13435], 00:17:23.905 | 99.99th=[14222] 00:17:23.905 bw ( KiB/s): min=34920, max=35848, per=99.90%, avg=35560.00, stdev=433.92, samples=4 00:17:23.905 iops : min= 8730, max= 8962, avg=8890.00, stdev=108.48, samples=4 00:17:23.905 write: IOPS=8914, BW=34.8MiB/s (36.5MB/s)(69.9MiB/2006msec); 0 zone resets 00:17:23.905 slat (usec): min=2, max=130, avg= 2.68, stdev= 1.41 00:17:23.905 clat (usec): 
min=1655, max=12055, avg=6374.67, stdev=557.90 00:17:23.905 lat (usec): min=1664, max=12058, avg=6377.35, stdev=557.84 00:17:23.905 clat percentiles (usec): 00:17:23.905 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5735], 20.00th=[ 5997], 00:17:23.905 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6456], 00:17:23.905 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7111], 00:17:23.905 | 99.00th=[ 7898], 99.50th=[ 8848], 99.90th=[10028], 99.95th=[10945], 00:17:23.905 | 99.99th=[11994] 00:17:23.905 bw ( KiB/s): min=34976, max=36032, per=99.99%, avg=35654.00, stdev=465.53, samples=4 00:17:23.905 iops : min= 8744, max= 9008, avg=8913.50, stdev=116.38, samples=4 00:17:23.905 lat (msec) : 2=0.01%, 4=0.12%, 10=99.39%, 20=0.48% 00:17:23.906 cpu : usr=58.75%, sys=35.06%, ctx=40, majf=0, minf=5 00:17:23.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:23.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.906 issued rwts: total=17852,17882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.906 00:17:23.906 Run status group 0 (all jobs): 00:17:23.906 READ: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.7MiB (73.1MB), run=2006-2006msec 00:17:23.906 WRITE: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.9MiB (73.2MB), run=2006-2006msec 00:17:23.906 03:31:01 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:23.906 03:31:01 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:23.906 03:31:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:23.906 03:31:01 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:23.906 03:31:01 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:23.906 03:31:01 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:23.906 03:31:01 -- common/autotest_common.sh@1327 -- # shift 00:17:23.906 03:31:01 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:23.906 03:31:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.906 03:31:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:23.906 03:31:01 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:23.906 03:31:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:23.906 03:31:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:23.906 03:31:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:23.906 03:31:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.906 03:31:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:23.906 03:31:01 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:17:23.906 03:31:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:23.906 03:31:01 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:17:23.906 03:31:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:23.906 03:31:01 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:23.906 03:31:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:23.906 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:23.906 fio-3.35 00:17:23.906 Starting 1 thread 00:17:23.906 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.433 00:17:26.433 test: (groupid=0, jobs=1): err= 0: pid=290720: Fri Apr 19 03:31:03 2024 00:17:26.433 read: IOPS=8214, BW=128MiB/s (135MB/s)(258MiB/2008msec) 00:17:26.433 slat (nsec): min=2796, max=91437, avg=3434.26, stdev=1484.22 00:17:26.433 clat (usec): min=2944, max=17502, avg=9124.10, stdev=2200.18 00:17:26.433 lat (usec): min=2947, max=17506, avg=9127.54, stdev=2200.20 00:17:26.433 clat percentiles (usec): 00:17:26.433 | 1.00th=[ 4686], 5.00th=[ 5669], 10.00th=[ 6456], 20.00th=[ 7308], 00:17:26.433 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9503], 00:17:26.433 | 70.00th=[10028], 80.00th=[10945], 90.00th=[12125], 95.00th=[12911], 00:17:26.433 | 99.00th=[14877], 99.50th=[15270], 99.90th=[16450], 99.95th=[16581], 00:17:26.433 | 99.99th=[17171] 00:17:26.433 bw ( KiB/s): min=61600, max=77792, per=52.48%, avg=68976.00, stdev=8215.86, samples=4 00:17:26.433 iops : min= 3850, max= 4862, avg=4311.00, stdev=513.49, samples=4 00:17:26.434 write: IOPS=4899, BW=76.6MiB/s (80.3MB/s)(141MiB/1842msec); 0 zone resets 00:17:26.434 slat (usec): min=30, max=124, avg=33.19, stdev= 4.66 00:17:26.434 clat (usec): min=5878, max=18937, avg=11044.93, stdev=2070.36 00:17:26.434 lat (usec): min=5910, max=18971, avg=11078.12, stdev=2070.45 00:17:26.434 clat percentiles (usec): 00:17:26.434 | 1.00th=[ 7308], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9372], 00:17:26.434 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10683], 60.00th=[11207], 00:17:26.434 | 70.00th=[11863], 80.00th=[12649], 90.00th=[14222], 95.00th=[15008], 00:17:26.434 | 99.00th=[16057], 99.50th=[16581], 99.90th=[18220], 99.95th=[18482], 00:17:26.434 | 99.99th=[19006] 00:17:26.434 bw ( KiB/s): min=64384, max=79392, per=91.19%, avg=71488.00, stdev=8074.45, samples=4 00:17:26.434 iops : min= 4024, max= 4962, avg=4468.00, stdev=504.65, samples=4 00:17:26.434 lat (msec) : 4=0.17%, 10=57.20%, 20=42.63% 00:17:26.434 cpu : usr=74.19%, sys=22.52%, ctx=30, majf=0, minf=1 00:17:26.434 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:26.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:26.434 issued rwts: total=16495,9025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.434 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:26.434 00:17:26.434 Run status group 0 (all jobs): 00:17:26.434 READ: bw=128MiB/s (135MB/s), 128MiB/s-128MiB/s (135MB/s-135MB/s), io=258MiB (270MB), run=2008-2008msec 00:17:26.434 WRITE: bw=76.6MiB/s (80.3MB/s), 76.6MiB/s-76.6MiB/s (80.3MB/s-80.3MB/s), io=141MiB (148MB), run=1842-1842msec 00:17:26.434 03:31:03 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.434 03:31:03 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.434 03:31:03 -- common/autotest_common.sh@10 -- # set +x 00:17:26.434 03:31:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.434 03:31:03 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:17:26.434 03:31:03 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:17:26.434 03:31:03 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:17:26.434 03:31:03 -- host/fio.sh@84 -- # nvmftestfini 00:17:26.434 03:31:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:26.434 03:31:03 -- nvmf/common.sh@117 -- # sync 00:17:26.434 03:31:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.434 03:31:03 -- nvmf/common.sh@120 -- # set +e 00:17:26.434 03:31:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.434 03:31:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.434 rmmod nvme_tcp 00:17:26.434 rmmod nvme_fabrics 00:17:26.434 rmmod nvme_keyring 00:17:26.434 03:31:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.434 03:31:03 -- nvmf/common.sh@124 -- # set -e 00:17:26.434 03:31:03 -- nvmf/common.sh@125 -- # return 0 00:17:26.434 03:31:03 -- nvmf/common.sh@478 -- # '[' -n 290163 ']' 00:17:26.434 03:31:03 -- nvmf/common.sh@479 -- # killprocess 290163 00:17:26.434 03:31:03 -- common/autotest_common.sh@936 -- # '[' -z 290163 ']' 00:17:26.434 03:31:03 -- common/autotest_common.sh@940 -- # kill -0 290163 00:17:26.434 03:31:03 -- common/autotest_common.sh@941 -- # uname 00:17:26.434 03:31:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:26.434 03:31:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 290163 00:17:26.434 03:31:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:26.434 03:31:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:26.434 03:31:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 290163' 00:17:26.434 killing process with pid 290163 00:17:26.434 03:31:03 -- common/autotest_common.sh@955 -- # kill 290163 00:17:26.434 03:31:03 -- common/autotest_common.sh@960 -- # wait 290163 00:17:26.693 03:31:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:26.693 03:31:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:26.693 03:31:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:26.693 03:31:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.693 03:31:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.693 03:31:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.693 03:31:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.693 03:31:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.229 03:31:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:29.229 00:17:29.229 real 0m10.136s 00:17:29.229 user 0m26.319s 00:17:29.229 sys 0m3.667s 00:17:29.229 03:31:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:29.229 03:31:06 -- common/autotest_common.sh@10 -- # set +x 00:17:29.229 ************************************ 00:17:29.229 END TEST nvmf_fio_host 00:17:29.229 ************************************ 00:17:29.229 03:31:06 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:29.229 03:31:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:29.229 03:31:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:29.229 03:31:06 -- common/autotest_common.sh@10 -- # set +x 
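Worth noting from the fio runs traced in TEST nvmf_fio_host above: fio itself is never linked against SPDK. The fio_plugin helper ldd-greps the engine binary for sanitizer libraries (none here, so asan_lib stays empty), LD_PRELOADs the spdk_nvme external ioengine, and passes the NVMe/TCP connection parameters through --filename. A condensed replay of the first traced invocation:

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
  # Preload ASan ahead of the engine if the plugin was built with it (empty in this run).
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      --bs=4096 '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'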
00:17:29.229 ************************************ 00:17:29.229 START TEST nvmf_failover 00:17:29.229 ************************************ 00:17:29.230 03:31:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:29.230 * Looking for test storage... 00:17:29.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:29.230 03:31:06 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.230 03:31:06 -- nvmf/common.sh@7 -- # uname -s 00:17:29.230 03:31:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.230 03:31:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.230 03:31:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.230 03:31:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.230 03:31:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.230 03:31:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.230 03:31:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.230 03:31:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.230 03:31:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.230 03:31:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.230 03:31:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.230 03:31:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.230 03:31:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.230 03:31:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.230 03:31:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:29.230 03:31:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.230 03:31:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:29.230 03:31:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.230 03:31:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.230 03:31:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.230 03:31:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.230 03:31:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.230 03:31:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.230 03:31:06 -- paths/export.sh@5 -- # export PATH 00:17:29.230 03:31:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.230 03:31:06 -- nvmf/common.sh@47 -- # : 0 00:17:29.230 03:31:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:29.230 03:31:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:29.230 03:31:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.230 03:31:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.230 03:31:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.230 03:31:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:29.230 03:31:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:29.230 03:31:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:29.230 03:31:06 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:29.230 03:31:06 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:29.230 03:31:06 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.230 03:31:06 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:29.230 03:31:06 -- host/failover.sh@18 -- # nvmftestinit 00:17:29.230 03:31:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:29.230 03:31:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.230 03:31:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:29.230 03:31:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:29.230 03:31:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:29.230 03:31:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.230 03:31:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:29.230 03:31:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.230 03:31:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:29.230 03:31:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:29.230 03:31:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:29.230 03:31:06 -- common/autotest_common.sh@10 -- # set +x 00:17:31.133 03:31:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:31.133 03:31:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:31.133 03:31:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:31.133 03:31:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:31.133 03:31:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:31.133 03:31:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:31.133 03:31:08 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:17:31.133 03:31:08 -- nvmf/common.sh@295 -- # net_devs=() 00:17:31.133 03:31:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:31.133 03:31:08 -- nvmf/common.sh@296 -- # e810=() 00:17:31.133 03:31:08 -- nvmf/common.sh@296 -- # local -ga e810 00:17:31.133 03:31:08 -- nvmf/common.sh@297 -- # x722=() 00:17:31.133 03:31:08 -- nvmf/common.sh@297 -- # local -ga x722 00:17:31.133 03:31:08 -- nvmf/common.sh@298 -- # mlx=() 00:17:31.133 03:31:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:31.133 03:31:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:31.133 03:31:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:31.133 03:31:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:31.133 03:31:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:31.133 03:31:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:31.133 03:31:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:31.133 03:31:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:31.134 03:31:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:31.134 03:31:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:31.134 03:31:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:31.134 03:31:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:31.134 03:31:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:31.134 03:31:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:31.134 03:31:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:31.134 03:31:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:31.134 03:31:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:31.134 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:31.134 03:31:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:31.134 03:31:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:31.134 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:31.134 03:31:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:31.134 03:31:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:31.134 03:31:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.134 03:31:08 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:17:31.134 03:31:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.134 03:31:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:31.134 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:31.134 03:31:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.134 03:31:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:31.134 03:31:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.134 03:31:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:31.134 03:31:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.134 03:31:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:31.134 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:31.134 03:31:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.134 03:31:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:31.134 03:31:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:31.134 03:31:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:31.134 03:31:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.134 03:31:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.134 03:31:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:31.134 03:31:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:31.134 03:31:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:31.134 03:31:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:31.134 03:31:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:31.134 03:31:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:31.134 03:31:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.134 03:31:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:31.134 03:31:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:31.134 03:31:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:31.134 03:31:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:31.134 03:31:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:31.134 03:31:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:31.134 03:31:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:31.134 03:31:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:31.134 03:31:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:31.134 03:31:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:31.134 03:31:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:31.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:17:31.134 00:17:31.134 --- 10.0.0.2 ping statistics --- 00:17:31.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.134 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:17:31.134 03:31:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:31.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:31.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:17:31.134 00:17:31.134 --- 10.0.0.1 ping statistics --- 00:17:31.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.134 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:17:31.134 03:31:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.134 03:31:08 -- nvmf/common.sh@411 -- # return 0 00:17:31.134 03:31:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:31.134 03:31:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.134 03:31:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:31.134 03:31:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.134 03:31:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:31.134 03:31:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:31.134 03:31:08 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:31.134 03:31:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:31.134 03:31:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:31.134 03:31:08 -- common/autotest_common.sh@10 -- # set +x 00:17:31.134 03:31:08 -- nvmf/common.sh@470 -- # nvmfpid=292912 00:17:31.134 03:31:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:31.134 03:31:08 -- nvmf/common.sh@471 -- # waitforlisten 292912 00:17:31.134 03:31:08 -- common/autotest_common.sh@817 -- # '[' -z 292912 ']' 00:17:31.134 03:31:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.134 03:31:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:31.134 03:31:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.134 03:31:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:31.135 03:31:08 -- common/autotest_common.sh@10 -- # set +x 00:17:31.135 [2024-04-19 03:31:08.578520] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:17:31.135 [2024-04-19 03:31:08.578605] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.135 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.135 [2024-04-19 03:31:08.657545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:31.392 [2024-04-19 03:31:08.779111] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.392 [2024-04-19 03:31:08.779183] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.392 [2024-04-19 03:31:08.779200] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.392 [2024-04-19 03:31:08.779213] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.392 [2024-04-19 03:31:08.779226] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
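At this point nvmftestinit has split the two E810 ports between network namespaces: cvl_0_0 was moved into cvl_0_0_ns_spdk as the target interface at 10.0.0.2, cvl_0_1 stayed in the root namespace as the initiator at 10.0.0.1, and the two pings above confirmed reachability in both directions before nvmf_tgt was started inside the namespace. Condensed into plain shell (device names, addresses, and commands are taken from the trace; rpc.py stands for the full scripts/rpc.py path and the nvmf_tgt path is shortened), the target setup, including the subsystem plumbing that failover.sh performs in the RPC calls traced below, amounts to:

    # topology: target port in its own netns, initiator port in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # the target application runs inside the namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

    # subsystem plumbing (traced below): TCP transport, a 64 MB malloc bdev
    # with 512-byte blocks, one subsystem, and listeners on three ports so
    # the test can fail over between 4420, 4421, and 4422
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$port"
    done

bdevperf then attaches to the subsystem on port 4420 and again on 4421 as the same controller name NVMe0, which is what lets the later nvmf_subsystem_remove_listener calls trigger the path failovers this test measures.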
00:17:31.392 [2024-04-19 03:31:08.779317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.392 [2024-04-19 03:31:08.779433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.392 [2024-04-19 03:31:08.779438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.392 03:31:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:31.392 03:31:08 -- common/autotest_common.sh@850 -- # return 0 00:17:31.392 03:31:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:31.392 03:31:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:31.392 03:31:08 -- common/autotest_common.sh@10 -- # set +x 00:17:31.392 03:31:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.393 03:31:08 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:31.649 [2024-04-19 03:31:09.179036] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.649 03:31:09 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:32.213 Malloc0 00:17:32.213 03:31:09 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:32.213 03:31:09 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:32.471 03:31:09 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.730 [2024-04-19 03:31:10.205820] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.730 03:31:10 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:32.987 [2024-04-19 03:31:10.470484] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:32.987 03:31:10 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:33.245 [2024-04-19 03:31:10.723253] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:33.245 03:31:10 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:33.245 03:31:10 -- host/failover.sh@31 -- # bdevperf_pid=293205 00:17:33.245 03:31:10 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:33.245 03:31:10 -- host/failover.sh@34 -- # waitforlisten 293205 /var/tmp/bdevperf.sock 00:17:33.245 03:31:10 -- common/autotest_common.sh@817 -- # '[' -z 293205 ']' 00:17:33.245 03:31:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.245 03:31:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:33.245 03:31:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:17:33.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.245 03:31:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:33.245 03:31:10 -- common/autotest_common.sh@10 -- # set +x 00:17:33.810 03:31:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:33.810 03:31:11 -- common/autotest_common.sh@850 -- # return 0 00:17:33.810 03:31:11 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:34.069 NVMe0n1 00:17:34.069 03:31:11 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:34.639 00:17:34.639 03:31:11 -- host/failover.sh@39 -- # run_test_pid=293342 00:17:34.639 03:31:11 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:34.639 03:31:11 -- host/failover.sh@41 -- # sleep 1 00:17:35.571 03:31:13 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:35.830 [2024-04-19 03:31:13.228178] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228243] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228265] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228277] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228289] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228301] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228324] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228347] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be 
set 00:17:35.830 [2024-04-19 03:31:13.228419] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228431] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228443] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228456] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228491] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228503] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228515] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228527] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 [2024-04-19 03:31:13.228539] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770e40 is same with the state(5) to be set 00:17:35.830 03:31:13 -- host/failover.sh@45 -- # sleep 3 00:17:39.109 03:31:16 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:39.109 00:17:39.109 03:31:16 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:39.367 [2024-04-19 03:31:16.834515] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772540 is same with the state(5) to be set 00:17:39.367 [2024-04-19 03:31:16.834581] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772540 is same with the state(5) to be set 00:17:39.367 [2024-04-19 03:31:16.834597] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772540 is same with the state(5) to be set 00:17:39.367 [2024-04-19 03:31:16.834609] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772540 is same with the state(5) to be set 00:17:39.367 [2024-04-19 03:31:16.834632] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772540 is same with the state(5) to be set 00:17:39.367 [2024-04-19 03:31:16.834644] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772540 is same with the state(5) to be set 00:17:39.367 03:31:16 -- host/failover.sh@50 -- # sleep 3 00:17:42.647 03:31:19 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.647 [2024-04-19 03:31:20.109501] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.647 03:31:20 -- 
host/failover.sh@55 -- # sleep 1 00:17:43.580 03:31:21 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:43.837 [2024-04-19 03:31:21.353697] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772c20 is same with the state(5) to be set 00:17:43.837 [2024-04-19 03:31:21.353760] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772c20 is same with the state(5) to be set 00:17:43.837 [2024-04-19 03:31:21.353784] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772c20 is same with the state(5) to be set 00:17:43.837 [2024-04-19 03:31:21.353798] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772c20 is same with the state(5) to be set 00:17:43.837 [2024-04-19 03:31:21.353811] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772c20 is same with the state(5) to be set 00:17:43.837 [2024-04-19 03:31:21.353823] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772c20 is same with the state(5) to be set 00:17:43.837 [2024-04-19 03:31:21.353836] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772c20 is same with the state(5) to be set 00:17:43.837 [2024-04-19 03:31:21.353849] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772c20 is same with the state(5) to be set 00:17:43.837 [2024-04-19 03:31:21.353877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772c20 is same with the state(5) to be set 00:17:43.837 [2024-04-19 03:31:21.353891] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772c20 is same with the state(5) to be set 00:17:43.837 03:31:21 -- host/failover.sh@59 -- # wait 293342 00:17:50.406 0 00:17:50.406 03:31:27 -- host/failover.sh@61 -- # killprocess 293205 00:17:50.406 03:31:27 -- common/autotest_common.sh@936 -- # '[' -z 293205 ']' 00:17:50.406 03:31:27 -- common/autotest_common.sh@940 -- # kill -0 293205 00:17:50.406 03:31:27 -- common/autotest_common.sh@941 -- # uname 00:17:50.406 03:31:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.406 03:31:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 293205 00:17:50.406 03:31:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:50.406 03:31:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:50.406 03:31:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 293205' 00:17:50.406 killing process with pid 293205 00:17:50.406 03:31:27 -- common/autotest_common.sh@955 -- # kill 293205 00:17:50.406 03:31:27 -- common/autotest_common.sh@960 -- # wait 293205 00:17:50.406 03:31:27 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:50.406 [2024-04-19 03:31:10.786918] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:17:50.406 [2024-04-19 03:31:10.787033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293205 ] 00:17:50.406 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.406 [2024-04-19 03:31:10.847478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.406 [2024-04-19 03:31:10.958107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.406 Running I/O for 15 seconds... 00:17:50.406 [2024-04-19 03:31:13.228886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.228926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.228953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.228969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.228985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.228999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.229028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.229056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.229084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.229113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.229157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75000 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.229184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.229212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.229240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.229280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.229308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.229337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.229391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.406 [2024-04-19 03:31:13.229426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.406 [2024-04-19 03:31:13.229456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.406 [2024-04-19 03:31:13.229471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 
[2024-04-19 03:31:13.229512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.229980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.229993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.230019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.230051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.230079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.407 [2024-04-19 03:31:13.230105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.407 [2024-04-19 03:31:13.230132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.407 [2024-04-19 03:31:13.230159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.407 [2024-04-19 03:31:13.230201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.407 [2024-04-19 03:31:13.230230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.407 [2024-04-19 03:31:13.230259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.407 [2024-04-19 03:31:13.230287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.407 [2024-04-19 03:31:13.230315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.407 [2024-04-19 03:31:13.230343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.407 [2024-04-19 03:31:13.230395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.407 [2024-04-19 03:31:13.230425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.407 [2024-04-19 03:31:13.230459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.407 [2024-04-19 03:31:13.230474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.408 [2024-04-19 03:31:13.230487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.408 [2024-04-19 03:31:13.230502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.408 [2024-04-19 03:31:13.230531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.408 [2024-04-19 03:31:13.230546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.408 [2024-04-19 03:31:13.230560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.408 [2024-04-19 03:31:13.230574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.408 [2024-04-19 03:31:13.230587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.408 [2024-04-19 03:31:13.230602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.408 [2024-04-19 03:31:13.230615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.408 [2024-04-19 03:31:13.230629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.408 [2024-04-19 03:31:13.230643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.408 [2024-04-19 03:31:13.230657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.408 [2024-04-19 03:31:13.230670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.408 [2024-04-19 03:31:13.230684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.408 [2024-04-19 03:31:13.230701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.408 [2024-04-19 03:31:13.230730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.408 [2024-04-19 03:31:13.230744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.408 
[2024-04-19 03:31:13.230759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:50.408 [2024-04-19 03:31:13.230772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs omitted for brevity: queued READs for lba 75232-75744 (len:8, SGL TRANSPORT DATA BLOCK) and WRITEs for lba 75944-75952 (len:8, SGL DATA BLOCK OFFSET), each completed with ABORTED - SQ DELETION (00/08) on qid:1 ...]
00:17:50.410 [2024-04-19 03:31:13.232837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7e70 is same with the state(5) to be set
00:17:50.410 [2024-04-19 03:31:13.232854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:50.410 [2024-04-19 03:31:13.232865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:50.410 [2024-04-19 03:31:13.232876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75752 len:8 PRP1 0x0 PRP2 0x0
00:17:50.410 [2024-04-19 03:31:13.232889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.410 [2024-04-19 03:31:13.232951] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18d7e70 was disconnected and freed. reset controller.
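Each READ/WRITE notice above is a queued I/O that the host driver flushed when the TCP qpair went down; the "(00/08)" in the paired completion is the NVMe status printed as (sct/sc), i.e. status code type 0x0 (generic command status) with status code 0x08 (Command Aborted due to SQ Deletion). A quick way to tally the flushed commands from a saved console log, as a minimal sketch (build.log is an assumed filename, not something this job produces):

  # count I/O-queue completions flushed with SQ DELETION
  grep -c 'ABORTED - SQ DELETION (00/08) qid:1' build.log
  # break the flushed commands down by opcode
  grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' build.log | sort | uniq -c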
00:17:50.410 [2024-04-19 03:31:13.232969] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:17:50.410 [2024-04-19 03:31:13.233016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:50.410 [2024-04-19 03:31:13.233036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.410 [2024-04-19 03:31:13.233051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:50.410 [2024-04-19 03:31:13.233064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.410 [2024-04-19 03:31:13.233078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:50.410 [2024-04-19 03:31:13.233091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.410 [2024-04-19 03:31:13.233106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:50.410 [2024-04-19 03:31:13.233120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.410 [2024-04-19 03:31:13.233134] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:50.410 [2024-04-19 03:31:13.233182] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b93f0 (9): Bad file descriptor
00:17:50.410 [2024-04-19 03:31:13.236490] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:50.410 [2024-04-19 03:31:13.401894] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
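The failover notice records bdev_nvme abandoning the path on 10.0.0.2:4420 and reconnecting on 10.0.0.2:4421 after the target's listener disappeared. The same scenario can be reproduced against a running nvmf target with SPDK's stock RPC client; this is a minimal sketch, not this job's actual harness (likely test/nvmf/host/failover.sh), and the bdev name NVMe0 and the Malloc0 namespace are assumptions:

  rpc=./scripts/rpc.py
  $rpc bdev_malloc_create -b Malloc0 64 512
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # host side: register both paths in failover mode
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # drop the active listener: in-flight I/O is aborted (SQ DELETION) and the
  # host resets the controller onto 4421, producing notices like those logged above
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420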
00:17:50.410 [2024-04-19 03:31:16.834792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:50.410 [2024-04-19 03:31:16.834837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs omitted for brevity: queued READs for lba 101136-101744 (len:8, SGL TRANSPORT DATA BLOCK) and WRITEs for lba 101760-102144 (len:8, SGL DATA BLOCK OFFSET), each completed with ABORTED - SQ DELETION (00/08) on qid:1 ...]
00:17:50.414 [2024-04-19 03:31:16.838802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c58e0 is same with the state(5) to be set
00:17:50.415 [2024-04-19 03:31:16.838819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:50.415 [2024-04-19 03:31:16.838831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:50.415 [2024-04-19 03:31:16.838842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101752 len:8 PRP1 0x0 PRP2 0x0
00:17:50.415 [2024-04-19 03:31:16.838855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.415 [2024-04-19 03:31:16.838918] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18c58e0 was disconnected and freed. reset controller.
00:17:50.415 [2024-04-19 03:31:16.838936] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:17:50.415 [2024-04-19 03:31:16.838983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:50.415 [2024-04-19 03:31:16.839002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.415 [2024-04-19 03:31:16.839018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:50.415 [2024-04-19 03:31:16.839032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.415 [2024-04-19 03:31:16.839046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:50.415 [2024-04-19 03:31:16.839059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.415 [2024-04-19 03:31:16.839073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:50.415 [2024-04-19 03:31:16.839086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.415 [2024-04-19 03:31:16.839100] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:50.415 [2024-04-19 03:31:16.842406] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:50.415 [2024-04-19 03:31:16.842449] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b93f0 (9): Bad file descriptor
00:17:50.415 [2024-04-19 03:31:16.914904] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
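The block above is one complete failover cycle in miniature: the recv-state error marks the TCP qpair teardown, every queued I/O is completed with ABORTED - SQ DELETION, bdev_nvme moves the failover trid from 10.0.0.2:4421 to 10.0.0.2:4422, and the reconnect finishes with a successful controller reset. When triaging a run like this, a one-line filter pulls just those transition events out of the noise; a minimal sketch, assuming a saved console log (the file name is a placeholder, the patterns are the literal notices printed above):

  # Placeholder log name; patterns copied verbatim from the messages above.
  grep -E 'Start failover from|disconnected and freed|Resetting controller successful' console.log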
00:17:50.415 [2024-04-19 03:31:21.354034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:50.415 [2024-04-19 03:31:21.354090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat, interleaved, for WRITE lba 31312 through 31688 and READ lba 30672 through 31288, every one completed ABORTED - SQ DELETION (00/08) ...]
00:17:50.419 [2024-04-19 03:31:21.357927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc670 is same with the state(5) to be set
00:17:50.419 [2024-04-19 03:31:21.357943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:50.419 [2024-04-19 03:31:21.357954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:50.419 [2024-04-19 03:31:21.357966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31296 len:8 PRP1 0x0 PRP2 0x0
00:17:50.419 [2024-04-19 03:31:21.357978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.419 [2024-04-19 03:31:21.358036] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18dc670 was disconnected and freed. reset controller.
00:17:50.419 [2024-04-19 03:31:21.358054] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:17:50.419 [2024-04-19 03:31:21.358102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:50.419 [2024-04-19 03:31:21.358126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.419 [2024-04-19 03:31:21.358142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:50.419 [2024-04-19 03:31:21.358156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.419 [2024-04-19 03:31:21.358170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:50.419 [2024-04-19 03:31:21.358184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.419 [2024-04-19 03:31:21.358197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:50.419 [2024-04-19 03:31:21.358211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:50.419 [2024-04-19 03:31:21.358224] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:50.419 [2024-04-19 03:31:21.358276] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b93f0 (9): Bad file descriptor
00:17:50.419 [2024-04-19 03:31:21.361534] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:50.419 [2024-04-19 03:31:21.518519] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
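This reset closes the rotation: 4420 to 4421 earlier in the run, 4421 to 4422 above, and now 4422 back to 4420, so three 'Resetting controller successful' notices should exist in the captured output, which is exactly what the script counts next. To see how the aborted I/O splits between reads and writes across the two cycles above, a sketch along these lines works on a saved log (the file name is again a placeholder):

  # Tally aborted READ vs WRITE submissions on qid 1 in the captured log.
  grep -Eo 'READ sqid:1|WRITE sqid:1' console.log | sort | uniq -c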
00:17:50.419 
00:17:50.419 Latency(us)
00:17:50.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:50.419 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:50.419 Verification LBA range: start 0x0 length 0x4000
00:17:50.419 NVMe0n1 : 15.01 8274.11 32.32 1044.52 0.00 13708.78 794.93 16117.00
00:17:50.419 ===================================================================================================================
00:17:50.419 Total : 8274.11 32.32 1044.52 0.00 13708.78 794.93 16117.00
00:17:50.419 Received shutdown signal, test time was about 15.000000 seconds
00:17:50.419 
00:17:50.419 Latency(us)
00:17:50.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:50.420 ===================================================================================================================
00:17:50.420 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:50.420 03:31:27 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:17:50.420 03:31:27 -- host/failover.sh@65 -- # count=3
00:17:50.420 03:31:27 -- host/failover.sh@67 -- # (( count != 3 ))
00:17:50.420 03:31:27 -- host/failover.sh@73 -- # bdevperf_pid=295183
00:17:50.420 03:31:27 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:17:50.420 03:31:27 -- host/failover.sh@75 -- # waitforlisten 295183 /var/tmp/bdevperf.sock
00:17:50.420 03:31:27 -- common/autotest_common.sh@817 -- # '[' -z 295183 ']'
00:17:50.420 03:31:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:50.420 03:31:27 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:50.420 03:31:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
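The xtrace lines above are the pass/fail gate for the phase that just finished: the script counts 'Resetting controller successful' notices, requires exactly three (one per path transition), then starts a fresh bdevperf in RPC-wait mode for the next phase. A minimal sketch of that gate, with variable and file names assumed rather than copied from host/failover.sh:

  # Assumed names: $testdir/try.txt holds the previous bdevperf output,
  # $rootdir is the SPDK checkout root.
  count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
  ((count == 3)) || { echo "expected 3 controller resets, got $count"; exit 1; }
  # Relaunch bdevperf waiting for RPC configuration (-z), capturing its log.
  "$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 1 -f &> "$testdir/try.txt" &
  bdevperf_pid=$!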
00:17:50.420 03:31:27 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:50.420 03:31:27 -- common/autotest_common.sh@10 -- # set +x
00:17:50.420 03:31:27 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:17:50.420 03:31:27 -- common/autotest_common.sh@850 -- # return 0
00:17:50.420 03:31:27 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:17:50.677 [2024-04-19 03:31:27.973584] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:17:50.677 03:31:27 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:17:50.677 [2024-04-19 03:31:28.210209] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:17:50.677 03:31:28 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:51.241 NVMe0n1
00:17:51.241 03:31:28 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:51.499
00:17:51.499 03:31:28 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:51.756
00:17:51.756 03:31:29 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:17:51.756 03:31:29 -- host/failover.sh@82 -- # grep -q NVMe0
00:17:52.014 03:31:29 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:52.271 03:31:29 -- host/failover.sh@87 -- # sleep 3
00:17:55.549 03:31:32 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:17:55.549 03:31:32 -- host/failover.sh@88 -- # grep -q NVMe0
00:17:55.549 03:31:33 -- host/failover.sh@90 -- # run_test_pid=295853
00:17:55.549 03:31:33 -- host/failover.sh@92 -- # wait 295853
00:17:55.549 03:31:33 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:56.922 0
00:17:56.922 03:31:34 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:17:56.922 [2024-04-19 03:31:27.472916] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization...
00:17:56.922 [2024-04-19 03:31:27.473015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid295183 ]
00:17:56.922 EAL: No free 2048 kB hugepages reported on node 1
00:17:56.922 [2024-04-19 03:31:27.532650] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:56.922 [2024-04-19 03:31:27.638785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:56.922 [2024-04-19 03:31:29.781501] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:17:56.922 [2024-04-19 03:31:29.781590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:56.922 [2024-04-19 03:31:29.781613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:56.922 [2024-04-19 03:31:29.781629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:56.922 [2024-04-19 03:31:29.781643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:56.922 [2024-04-19 03:31:29.781656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:56.922 [2024-04-19 03:31:29.781669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:56.922 [2024-04-19 03:31:29.781683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:56.922 [2024-04-19 03:31:29.781697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:56.922 [2024-04-19 03:31:29.781710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:56.922 [2024-04-19 03:31:29.781757] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:56.922 [2024-04-19 03:31:29.781789] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c133f0 (9): Bad file descriptor
00:17:56.922 [2024-04-19 03:31:29.872547] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:56.922 Running I/O for 1 seconds...
00:17:56.922 00:17:56.923 Latency(us) 00:17:56.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.923 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:56.923 Verification LBA range: start 0x0 length 0x4000 00:17:56.923 NVMe0n1 : 1.01 8485.38 33.15 0.00 0.00 15023.39 3179.71 20291.89 00:17:56.923 =================================================================================================================== 00:17:56.923 Total : 8485.38 33.15 0.00 0.00 15023.39 3179.71 20291.89 00:17:56.923 03:31:34 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:56.923 03:31:34 -- host/failover.sh@95 -- # grep -q NVMe0 00:17:57.179 03:31:34 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:57.436 03:31:34 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:57.436 03:31:34 -- host/failover.sh@99 -- # grep -q NVMe0 00:17:57.693 03:31:35 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:57.950 03:31:35 -- host/failover.sh@101 -- # sleep 3 00:18:01.259 03:31:38 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:01.259 03:31:38 -- host/failover.sh@103 -- # grep -q NVMe0 00:18:01.259 03:31:38 -- host/failover.sh@108 -- # killprocess 295183 00:18:01.259 03:31:38 -- common/autotest_common.sh@936 -- # '[' -z 295183 ']' 00:18:01.259 03:31:38 -- common/autotest_common.sh@940 -- # kill -0 295183 00:18:01.259 03:31:38 -- common/autotest_common.sh@941 -- # uname 00:18:01.259 03:31:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.259 03:31:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 295183 00:18:01.259 03:31:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:01.259 03:31:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:01.259 03:31:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 295183' 00:18:01.259 killing process with pid 295183 00:18:01.259 03:31:38 -- common/autotest_common.sh@955 -- # kill 295183 00:18:01.259 03:31:38 -- common/autotest_common.sh@960 -- # wait 295183 00:18:01.516 03:31:38 -- host/failover.sh@110 -- # sync 00:18:01.516 03:31:38 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.516 03:31:39 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:01.516 03:31:39 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:01.516 03:31:39 -- host/failover.sh@116 -- # nvmftestfini 00:18:01.516 03:31:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:01.516 03:31:39 -- nvmf/common.sh@117 -- # sync 00:18:01.516 03:31:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:01.516 03:31:39 -- nvmf/common.sh@120 -- # set +e 00:18:01.516 03:31:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:01.516 03:31:39 -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:18:01.773 rmmod nvme_tcp 00:18:01.773 rmmod nvme_fabrics 00:18:01.773 rmmod nvme_keyring 00:18:01.773 03:31:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:01.773 03:31:39 -- nvmf/common.sh@124 -- # set -e 00:18:01.773 03:31:39 -- nvmf/common.sh@125 -- # return 0 00:18:01.773 03:31:39 -- nvmf/common.sh@478 -- # '[' -n 292912 ']' 00:18:01.774 03:31:39 -- nvmf/common.sh@479 -- # killprocess 292912 00:18:01.774 03:31:39 -- common/autotest_common.sh@936 -- # '[' -z 292912 ']' 00:18:01.774 03:31:39 -- common/autotest_common.sh@940 -- # kill -0 292912 00:18:01.774 03:31:39 -- common/autotest_common.sh@941 -- # uname 00:18:01.774 03:31:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.774 03:31:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 292912 00:18:01.774 03:31:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:01.774 03:31:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:01.774 03:31:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 292912' 00:18:01.774 killing process with pid 292912 00:18:01.774 03:31:39 -- common/autotest_common.sh@955 -- # kill 292912 00:18:01.774 03:31:39 -- common/autotest_common.sh@960 -- # wait 292912 00:18:02.032 03:31:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:02.032 03:31:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:02.032 03:31:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:02.032 03:31:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:02.032 03:31:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:02.032 03:31:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.032 03:31:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.032 03:31:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.936 03:31:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:03.936 00:18:03.936 real 0m35.177s 00:18:03.936 user 2m3.895s 00:18:03.936 sys 0m5.812s 00:18:03.936 03:31:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:03.936 03:31:41 -- common/autotest_common.sh@10 -- # set +x 00:18:03.936 ************************************ 00:18:03.936 END TEST nvmf_failover 00:18:03.936 ************************************ 00:18:04.194 03:31:41 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:04.194 03:31:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:04.194 03:31:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:04.194 03:31:41 -- common/autotest_common.sh@10 -- # set +x 00:18:04.194 ************************************ 00:18:04.194 START TEST nvmf_discovery 00:18:04.194 ************************************ 00:18:04.194 03:31:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:04.194 * Looking for test storage... 
00:18:04.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:04.194 03:31:41 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:04.194 03:31:41 -- nvmf/common.sh@7 -- # uname -s 00:18:04.194 03:31:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.194 03:31:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.194 03:31:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.194 03:31:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.194 03:31:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.194 03:31:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.194 03:31:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.194 03:31:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.194 03:31:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.194 03:31:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.194 03:31:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:04.194 03:31:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:04.194 03:31:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.194 03:31:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.194 03:31:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:04.194 03:31:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.194 03:31:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:04.194 03:31:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.194 03:31:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.194 03:31:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.194 03:31:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.194 03:31:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.194 03:31:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.194 03:31:41 -- paths/export.sh@5 -- # export PATH 00:18:04.194 03:31:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.194 03:31:41 -- nvmf/common.sh@47 -- # : 0 00:18:04.194 03:31:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:04.194 03:31:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:04.194 03:31:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.194 03:31:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.194 03:31:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.194 03:31:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:04.194 03:31:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:04.194 03:31:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:04.194 03:31:41 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:04.194 03:31:41 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:04.194 03:31:41 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:04.194 03:31:41 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:04.194 03:31:41 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:04.194 03:31:41 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:04.194 03:31:41 -- host/discovery.sh@25 -- # nvmftestinit 00:18:04.194 03:31:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:04.194 03:31:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.194 03:31:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:04.194 03:31:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:04.194 03:31:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:04.194 03:31:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.194 03:31:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.194 03:31:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.194 03:31:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:04.194 03:31:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:04.194 03:31:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:04.194 03:31:41 -- common/autotest_common.sh@10 -- # set +x 00:18:06.094 03:31:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:06.094 03:31:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:06.094 03:31:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:06.094 03:31:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:06.094 03:31:43 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:06.094 03:31:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:06.094 03:31:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:06.094 03:31:43 -- nvmf/common.sh@295 -- # net_devs=() 00:18:06.094 03:31:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:06.094 03:31:43 -- nvmf/common.sh@296 -- # e810=() 00:18:06.094 03:31:43 -- nvmf/common.sh@296 -- # local -ga e810 00:18:06.094 03:31:43 -- nvmf/common.sh@297 -- # x722=() 00:18:06.094 03:31:43 -- nvmf/common.sh@297 -- # local -ga x722 00:18:06.094 03:31:43 -- nvmf/common.sh@298 -- # mlx=() 00:18:06.094 03:31:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:06.094 03:31:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.094 03:31:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.094 03:31:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.094 03:31:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.094 03:31:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.094 03:31:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.094 03:31:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.094 03:31:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.094 03:31:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.094 03:31:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.094 03:31:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.094 03:31:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:06.094 03:31:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:06.094 03:31:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:06.094 03:31:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:06.094 03:31:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:06.094 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:06.094 03:31:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:06.094 03:31:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:06.094 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:06.094 03:31:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:06.094 03:31:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.094 
03:31:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.094 03:31:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:06.094 03:31:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.094 03:31:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:06.094 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:06.094 03:31:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.094 03:31:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.094 03:31:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.094 03:31:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:06.094 03:31:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.094 03:31:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:06.094 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:06.094 03:31:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.094 03:31:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:06.094 03:31:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:06.094 03:31:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:06.094 03:31:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:06.094 03:31:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.094 03:31:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.094 03:31:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:06.094 03:31:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:06.094 03:31:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:06.094 03:31:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:06.094 03:31:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:06.094 03:31:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:06.094 03:31:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.094 03:31:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:06.094 03:31:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:06.094 03:31:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:06.094 03:31:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:06.094 03:31:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:06.094 03:31:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:06.094 03:31:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:06.094 03:31:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:06.353 03:31:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:06.353 03:31:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:06.353 03:31:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:06.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:06.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:18:06.353 00:18:06.353 --- 10.0.0.2 ping statistics --- 00:18:06.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.353 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:18:06.353 03:31:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:06.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:06.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:18:06.353 00:18:06.353 --- 10.0.0.1 ping statistics --- 00:18:06.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.353 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:18:06.353 03:31:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.353 03:31:43 -- nvmf/common.sh@411 -- # return 0 00:18:06.353 03:31:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:06.353 03:31:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.353 03:31:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:06.353 03:31:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:06.353 03:31:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.353 03:31:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:06.353 03:31:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:06.354 03:31:43 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:06.354 03:31:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:06.354 03:31:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:06.354 03:31:43 -- common/autotest_common.sh@10 -- # set +x 00:18:06.354 03:31:43 -- nvmf/common.sh@470 -- # nvmfpid=298466 00:18:06.354 03:31:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:06.354 03:31:43 -- nvmf/common.sh@471 -- # waitforlisten 298466 00:18:06.354 03:31:43 -- common/autotest_common.sh@817 -- # '[' -z 298466 ']' 00:18:06.354 03:31:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.354 03:31:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:06.354 03:31:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.354 03:31:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:06.354 03:31:43 -- common/autotest_common.sh@10 -- # set +x 00:18:06.354 [2024-04-19 03:31:43.758733] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:18:06.354 [2024-04-19 03:31:43.758821] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.354 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.354 [2024-04-19 03:31:43.822547] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.612 [2024-04-19 03:31:43.929050] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.612 [2024-04-19 03:31:43.929099] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.612 [2024-04-19 03:31:43.929122] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.612 [2024-04-19 03:31:43.929133] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.612 [2024-04-19 03:31:43.929143] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
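
The nvmf target being waited on here was started inside the cvl_0_0_ns_spdk network namespace set up by nvmf_tcp_init just above, so 10.0.0.2 terminates on the namespaced cvl_0_0 interface. Condensed from the trace, with the Jenkins workspace path shortened:

  # cvl_0_0 was already moved into the namespace and given 10.0.0.2/24
  # (nvmf/common.sh@248-@255 above); the target then runs inside it.
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # waitforlisten then polls /var/tmp/spdk.sock, as it did for bdevperf earlier.
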
00:18:06.612 [2024-04-19 03:31:43.929168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.612 03:31:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:06.612 03:31:44 -- common/autotest_common.sh@850 -- # return 0 00:18:06.612 03:31:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:06.612 03:31:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:06.612 03:31:44 -- common/autotest_common.sh@10 -- # set +x 00:18:06.612 03:31:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.612 03:31:44 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:06.612 03:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.612 03:31:44 -- common/autotest_common.sh@10 -- # set +x 00:18:06.612 [2024-04-19 03:31:44.062118] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.612 03:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.612 03:31:44 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:06.612 03:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.612 03:31:44 -- common/autotest_common.sh@10 -- # set +x 00:18:06.612 [2024-04-19 03:31:44.070337] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:06.612 03:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.612 03:31:44 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:06.612 03:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.612 03:31:44 -- common/autotest_common.sh@10 -- # set +x 00:18:06.612 null0 00:18:06.612 03:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.612 03:31:44 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:06.612 03:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.612 03:31:44 -- common/autotest_common.sh@10 -- # set +x 00:18:06.612 null1 00:18:06.612 03:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.612 03:31:44 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:06.612 03:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.612 03:31:44 -- common/autotest_common.sh@10 -- # set +x 00:18:06.612 03:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.612 03:31:44 -- host/discovery.sh@45 -- # hostpid=298496 00:18:06.612 03:31:44 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:06.612 03:31:44 -- host/discovery.sh@46 -- # waitforlisten 298496 /tmp/host.sock 00:18:06.612 03:31:44 -- common/autotest_common.sh@817 -- # '[' -z 298496 ']' 00:18:06.612 03:31:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:18:06.612 03:31:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:06.612 03:31:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:06.612 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:06.612 03:31:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:06.612 03:31:44 -- common/autotest_common.sh@10 -- # set +x 00:18:06.612 [2024-04-19 03:31:44.140755] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
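
Stripped of the xtrace noise, the target-side preparation for the discovery test reduces to a transport, a listener on the well-known discovery NQN and port 8009, and two null bdevs to expose later. A sketch using the same calls traced above (rpc_cmd is the autotest wrapper around rpc.py; the bdev sizes are copied from the trace):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512   # backing namespaces for cnode0
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine
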
00:18:06.612 [2024-04-19 03:31:44.140819] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid298496 ] 00:18:06.612 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.870 [2024-04-19 03:31:44.203536] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.870 [2024-04-19 03:31:44.320468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.803 03:31:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:07.803 03:31:45 -- common/autotest_common.sh@850 -- # return 0 00:18:07.803 03:31:45 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:07.803 03:31:45 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:07.803 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.803 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:07.803 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.803 03:31:45 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:07.803 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.803 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:07.803 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.803 03:31:45 -- host/discovery.sh@72 -- # notify_id=0 00:18:07.803 03:31:45 -- host/discovery.sh@83 -- # get_subsystem_names 00:18:07.803 03:31:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:07.803 03:31:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:07.803 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.803 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:07.803 03:31:45 -- host/discovery.sh@59 -- # sort 00:18:07.803 03:31:45 -- host/discovery.sh@59 -- # xargs 00:18:07.803 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.803 03:31:45 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:07.803 03:31:45 -- host/discovery.sh@84 -- # get_bdev_list 00:18:07.803 03:31:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:07.803 03:31:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:07.803 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.803 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:07.803 03:31:45 -- host/discovery.sh@55 -- # sort 00:18:07.803 03:31:45 -- host/discovery.sh@55 -- # xargs 00:18:07.803 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.803 03:31:45 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:07.803 03:31:45 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:07.803 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.803 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:07.803 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.803 03:31:45 -- host/discovery.sh@87 -- # get_subsystem_names 00:18:07.803 03:31:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:07.803 03:31:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:07.803 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.803 03:31:45 -- common/autotest_common.sh@10 -- # set 
+x 00:18:07.803 03:31:45 -- host/discovery.sh@59 -- # sort 00:18:07.803 03:31:45 -- host/discovery.sh@59 -- # xargs 00:18:07.803 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.803 03:31:45 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:07.803 03:31:45 -- host/discovery.sh@88 -- # get_bdev_list 00:18:07.803 03:31:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:07.803 03:31:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:07.803 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.803 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:07.803 03:31:45 -- host/discovery.sh@55 -- # sort 00:18:07.803 03:31:45 -- host/discovery.sh@55 -- # xargs 00:18:07.803 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.803 03:31:45 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:07.803 03:31:45 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:07.803 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.803 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:07.803 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.803 03:31:45 -- host/discovery.sh@91 -- # get_subsystem_names 00:18:07.803 03:31:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:07.803 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.803 03:31:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:07.803 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:07.803 03:31:45 -- host/discovery.sh@59 -- # sort 00:18:07.803 03:31:45 -- host/discovery.sh@59 -- # xargs 00:18:07.803 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.803 03:31:45 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:07.803 03:31:45 -- host/discovery.sh@92 -- # get_bdev_list 00:18:07.803 03:31:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:07.803 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.803 03:31:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:07.803 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:07.803 03:31:45 -- host/discovery.sh@55 -- # sort 00:18:07.803 03:31:45 -- host/discovery.sh@55 -- # xargs 00:18:07.803 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.803 03:31:45 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:07.803 03:31:45 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:07.803 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.803 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:08.061 [2024-04-19 03:31:45.365899] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.061 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.061 03:31:45 -- host/discovery.sh@97 -- # get_subsystem_names 00:18:08.061 03:31:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:08.061 03:31:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:08.061 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.061 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:08.061 03:31:45 -- host/discovery.sh@59 -- # sort 00:18:08.061 03:31:45 -- host/discovery.sh@59 -- # xargs 00:18:08.061 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.061 03:31:45 -- 
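
All of the get_subsystem_names/get_bdev_list probes above come back empty because nothing is exported yet. Consolidated as a sketch, the steps traced around this point are what make the discovery entry, and then the nvme0 controller on the host, appear:

  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Whitelisting the host NQN lets the discovered subsystem actually connect.
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
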
host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:08.061 03:31:45 -- host/discovery.sh@98 -- # get_bdev_list 00:18:08.061 03:31:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:08.061 03:31:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:08.061 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.061 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:08.061 03:31:45 -- host/discovery.sh@55 -- # sort 00:18:08.061 03:31:45 -- host/discovery.sh@55 -- # xargs 00:18:08.061 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.061 03:31:45 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:08.061 03:31:45 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:08.061 03:31:45 -- host/discovery.sh@79 -- # expected_count=0 00:18:08.061 03:31:45 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:08.061 03:31:45 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:08.061 03:31:45 -- common/autotest_common.sh@901 -- # local max=10 00:18:08.061 03:31:45 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:08.061 03:31:45 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:08.061 03:31:45 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:08.061 03:31:45 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:08.061 03:31:45 -- host/discovery.sh@74 -- # jq '. | length' 00:18:08.061 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.061 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:08.061 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.061 03:31:45 -- host/discovery.sh@74 -- # notification_count=0 00:18:08.061 03:31:45 -- host/discovery.sh@75 -- # notify_id=0 00:18:08.061 03:31:45 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:08.061 03:31:45 -- common/autotest_common.sh@904 -- # return 0 00:18:08.061 03:31:45 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:08.061 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.061 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:08.061 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.061 03:31:45 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:08.061 03:31:45 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:08.061 03:31:45 -- common/autotest_common.sh@901 -- # local max=10 00:18:08.061 03:31:45 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:08.061 03:31:45 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:08.061 03:31:45 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:08.061 03:31:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:08.061 03:31:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:08.061 03:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.061 03:31:45 -- host/discovery.sh@59 -- # sort 00:18:08.061 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:08.061 03:31:45 -- host/discovery.sh@59 -- # xargs 00:18:08.061 03:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:18:08.061 03:31:45 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:18:08.061 03:31:45 -- common/autotest_common.sh@906 -- # sleep 1 00:18:08.626 [2024-04-19 03:31:46.143297] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:08.626 [2024-04-19 03:31:46.143329] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:08.626 [2024-04-19 03:31:46.143349] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:08.883 [2024-04-19 03:31:46.230656] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:08.883 [2024-04-19 03:31:46.332562] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:08.883 [2024-04-19 03:31:46.332584] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:09.141 03:31:46 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:09.141 03:31:46 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:09.141 03:31:46 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:09.141 03:31:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:09.141 03:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.141 03:31:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:09.141 03:31:46 -- common/autotest_common.sh@10 -- # set +x 00:18:09.141 03:31:46 -- host/discovery.sh@59 -- # sort 00:18:09.141 03:31:46 -- host/discovery.sh@59 -- # xargs 00:18:09.141 03:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.141 03:31:46 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.141 03:31:46 -- common/autotest_common.sh@904 -- # return 0 00:18:09.141 03:31:46 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:09.141 03:31:46 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:09.141 03:31:46 -- common/autotest_common.sh@901 -- # local max=10 00:18:09.141 03:31:46 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:09.141 03:31:46 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:09.141 03:31:46 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:09.141 03:31:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.141 03:31:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:09.141 03:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.141 03:31:46 -- host/discovery.sh@55 -- # sort 00:18:09.141 03:31:46 -- common/autotest_common.sh@10 -- # set +x 00:18:09.141 03:31:46 -- host/discovery.sh@55 -- # xargs 00:18:09.141 03:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.141 03:31:46 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:09.141 03:31:46 -- common/autotest_common.sh@904 -- # return 0 00:18:09.141 03:31:46 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:09.141 03:31:46 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:09.141 03:31:46 -- common/autotest_common.sh@901 -- # local max=10 00:18:09.141 03:31:46 -- 
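
The @900-@906 expansions above (and throughout the rest of this test) all come from one polling helper. Reconstructed from the trace it looks roughly like the following; the failing branch is assumed, since the log only ever shows the success path:

  waitforcondition() {
      local cond=$1   # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1   # assumed: the trace never reaches the exhausted case
  }
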
common/autotest_common.sh@902 -- # (( max-- )) 00:18:09.141 03:31:46 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:09.141 03:31:46 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:09.141 03:31:46 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:09.141 03:31:46 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:09.141 03:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.141 03:31:46 -- common/autotest_common.sh@10 -- # set +x 00:18:09.141 03:31:46 -- host/discovery.sh@63 -- # sort -n 00:18:09.141 03:31:46 -- host/discovery.sh@63 -- # xargs 00:18:09.141 03:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.141 03:31:46 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:18:09.141 03:31:46 -- common/autotest_common.sh@904 -- # return 0 00:18:09.141 03:31:46 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:09.141 03:31:46 -- host/discovery.sh@79 -- # expected_count=1 00:18:09.141 03:31:46 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:09.141 03:31:46 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:09.141 03:31:46 -- common/autotest_common.sh@901 -- # local max=10 00:18:09.141 03:31:46 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:09.141 03:31:46 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:09.141 03:31:46 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:09.141 03:31:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:09.141 03:31:46 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:09.141 03:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.141 03:31:46 -- common/autotest_common.sh@10 -- # set +x 00:18:09.141 03:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.398 03:31:46 -- host/discovery.sh@74 -- # notification_count=1 00:18:09.398 03:31:46 -- host/discovery.sh@75 -- # notify_id=1 00:18:09.398 03:31:46 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:09.398 03:31:46 -- common/autotest_common.sh@904 -- # return 0 00:18:09.398 03:31:46 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:09.398 03:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.398 03:31:46 -- common/autotest_common.sh@10 -- # set +x 00:18:09.398 03:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.398 03:31:46 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:09.398 03:31:46 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:09.398 03:31:46 -- common/autotest_common.sh@901 -- # local max=10 00:18:09.398 03:31:46 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:09.398 03:31:46 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:09.398 03:31:46 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:09.398 03:31:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.398 03:31:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:09.398 03:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.398 03:31:46 -- common/autotest_common.sh@10 -- # set +x 00:18:09.398 03:31:46 -- host/discovery.sh@55 -- # sort 00:18:09.398 03:31:46 -- host/discovery.sh@55 -- # xargs 00:18:09.398 03:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.398 03:31:46 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:09.398 03:31:46 -- common/autotest_common.sh@904 -- # return 0 00:18:09.398 03:31:46 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:09.398 03:31:46 -- host/discovery.sh@79 -- # expected_count=1 00:18:09.399 03:31:46 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:09.399 03:31:46 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:09.399 03:31:46 -- common/autotest_common.sh@901 -- # local max=10 00:18:09.399 03:31:46 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:09.399 03:31:46 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:09.399 03:31:46 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:09.399 03:31:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:09.399 03:31:46 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:09.399 03:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.399 03:31:46 -- common/autotest_common.sh@10 -- # set +x 00:18:09.399 03:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.399 03:31:46 -- host/discovery.sh@74 -- # notification_count=0 00:18:09.399 03:31:46 -- host/discovery.sh@75 -- # notify_id=1 00:18:09.399 03:31:46 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:09.399 03:31:46 -- common/autotest_common.sh@906 -- # sleep 1 00:18:10.330 03:31:47 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:10.330 03:31:47 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:10.330 03:31:47 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:10.330 03:31:47 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:10.330 03:31:47 -- host/discovery.sh@74 -- # jq '. | length' 00:18:10.330 03:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:10.330 03:31:47 -- common/autotest_common.sh@10 -- # set +x 00:18:10.330 03:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:10.330 03:31:47 -- host/discovery.sh@74 -- # notification_count=1 00:18:10.330 03:31:47 -- host/discovery.sh@75 -- # notify_id=2 00:18:10.330 03:31:47 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:10.330 03:31:47 -- common/autotest_common.sh@904 -- # return 0 00:18:10.330 03:31:47 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:10.330 03:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:10.330 03:31:47 -- common/autotest_common.sh@10 -- # set +x 00:18:10.330 [2024-04-19 03:31:47.853298] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:10.330 [2024-04-19 03:31:47.854290] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:10.330 [2024-04-19 03:31:47.854333] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:10.330 03:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:10.330 03:31:47 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:10.330 03:31:47 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:10.330 03:31:47 -- common/autotest_common.sh@901 -- # local max=10 00:18:10.330 03:31:47 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:10.330 03:31:47 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:10.330 03:31:47 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:10.330 03:31:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:10.330 03:31:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:10.330 03:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:10.330 03:31:47 -- common/autotest_common.sh@10 -- # set +x 00:18:10.330 03:31:47 -- host/discovery.sh@59 -- # sort 00:18:10.330 03:31:47 -- host/discovery.sh@59 -- # xargs 00:18:10.330 03:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:10.587 03:31:47 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.587 03:31:47 -- common/autotest_common.sh@904 -- # return 0 00:18:10.587 03:31:47 -- host/discovery.sh@121 -- # 
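
What the next step exercises: adding a second listener to cnode0 raises an asynchronous event on the discovery controller, and the host is expected to refetch the discovery log page and attach the new path. In the test's own terms (NVMF_PORT=4420 and NVMF_SECOND_PORT=4421 per nvmf/common.sh earlier in this log), roughly:

  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  # host/discovery.sh@122: allow up to ten polls for both ports to show up.
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
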
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:10.587 03:31:47 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:10.587 03:31:47 -- common/autotest_common.sh@901 -- # local max=10 00:18:10.587 03:31:47 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:10.587 03:31:47 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:10.587 03:31:47 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:10.587 03:31:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:10.587 03:31:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:10.587 03:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:10.587 03:31:47 -- common/autotest_common.sh@10 -- # set +x 00:18:10.587 03:31:47 -- host/discovery.sh@55 -- # sort 00:18:10.587 03:31:47 -- host/discovery.sh@55 -- # xargs 00:18:10.587 03:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:10.587 [2024-04-19 03:31:47.941717] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:10.587 03:31:47 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:10.587 03:31:47 -- common/autotest_common.sh@904 -- # return 0 00:18:10.588 03:31:47 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:10.588 03:31:47 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:10.588 03:31:47 -- common/autotest_common.sh@901 -- # local max=10 00:18:10.588 03:31:47 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:10.588 03:31:47 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:10.588 03:31:47 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:10.588 03:31:47 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:10.588 03:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:10.588 03:31:47 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:10.588 03:31:47 -- common/autotest_common.sh@10 -- # set +x 00:18:10.588 03:31:47 -- host/discovery.sh@63 -- # sort -n 00:18:10.588 03:31:47 -- host/discovery.sh@63 -- # xargs 00:18:10.588 03:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:10.588 03:31:47 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:10.588 03:31:47 -- common/autotest_common.sh@906 -- # sleep 1 00:18:10.588 [2024-04-19 03:31:48.040458] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:10.588 [2024-04-19 03:31:48.040479] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:10.588 [2024-04-19 03:31:48.040488] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:11.520 03:31:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.520 03:31:48 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:11.520 03:31:48 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:11.520 
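
get_subsystem_paths, whose expansion appears at host/discovery.sh@63 above, reduces to a single jq pipeline over the controller listing; reconstructed from the trace:

  get_subsystem_paths() {
      # Prints the sorted trsvcid list for one controller, e.g. "4420 4421".
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
          jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
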
03:31:48 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:11.520 03:31:48 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:11.520 03:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.520 03:31:48 -- host/discovery.sh@63 -- # sort -n 00:18:11.520 03:31:48 -- common/autotest_common.sh@10 -- # set +x 00:18:11.520 03:31:48 -- host/discovery.sh@63 -- # xargs 00:18:11.521 03:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.521 03:31:49 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:11.521 03:31:49 -- common/autotest_common.sh@904 -- # return 0 00:18:11.521 03:31:49 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:11.521 03:31:49 -- host/discovery.sh@79 -- # expected_count=0 00:18:11.521 03:31:49 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:11.521 03:31:49 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:11.521 03:31:49 -- common/autotest_common.sh@901 -- # local max=10 00:18:11.521 03:31:49 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.521 03:31:49 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:11.521 03:31:49 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:11.521 03:31:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:11.521 03:31:49 -- host/discovery.sh@74 -- # jq '. | length' 00:18:11.521 03:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.521 03:31:49 -- common/autotest_common.sh@10 -- # set +x 00:18:11.521 03:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.779 03:31:49 -- host/discovery.sh@74 -- # notification_count=0 00:18:11.779 03:31:49 -- host/discovery.sh@75 -- # notify_id=2 00:18:11.779 03:31:49 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:11.779 03:31:49 -- common/autotest_common.sh@904 -- # return 0 00:18:11.779 03:31:49 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:11.779 03:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.779 03:31:49 -- common/autotest_common.sh@10 -- # set +x 00:18:11.779 [2024-04-19 03:31:49.085934] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:11.779 [2024-04-19 03:31:49.085971] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:11.779 [2024-04-19 03:31:49.086418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.779 [2024-04-19 03:31:49.086476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.779 [2024-04-19 03:31:49.086493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.779 [2024-04-19 03:31:49.086507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.779 [2024-04-19 03:31:49.086521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
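The retry loop driving every check above is the waitforcondition helper from common/autotest_common.sh; its shape can be read off the @900-@906 trace lines (local cond, local max=10, (( max-- )), eval, sleep 1). A minimal sketch inferred from that trace, not the verbatim script source:

waitforcondition() {
    local cond=$1        # condition string, eval'd on every attempt
    local max=10         # retry budget, per the 'local max=10' trace line
    while (( max-- )); do
        eval "$cond" && return 0   # trace @903/@904: eval, then return 0
        sleep 1                    # trace @906: sleep 1 between attempts
    done
    return 1   # assumption: the exhausted-retries path is not exercised in this log
}

# As invoked in the trace:
# waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'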
nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.779 [2024-04-19 03:31:49.086537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.779 [2024-04-19 03:31:49.086552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.779 [2024-04-19 03:31:49.086566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.779 [2024-04-19 03:31:49.086580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59a30 is same with the state(5) to be set 00:18:11.779 03:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.779 03:31:49 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:11.779 03:31:49 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:11.779 03:31:49 -- common/autotest_common.sh@901 -- # local max=10 00:18:11.779 03:31:49 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.779 03:31:49 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:11.779 03:31:49 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:11.779 03:31:49 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:11.779 03:31:49 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:11.779 03:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.779 03:31:49 -- common/autotest_common.sh@10 -- # set +x 00:18:11.779 03:31:49 -- host/discovery.sh@59 -- # sort 00:18:11.779 03:31:49 -- host/discovery.sh@59 -- # xargs 00:18:11.779 [2024-04-19 03:31:49.096445] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59a30 (9): Bad file descriptor 00:18:11.779 03:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.779 [2024-04-19 03:31:49.106468] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:11.779 [2024-04-19 03:31:49.106709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.779 [2024-04-19 03:31:49.106863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.779 [2024-04-19 03:31:49.106890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b59a30 with addr=10.0.0.2, port=4420 00:18:11.779 [2024-04-19 03:31:49.106907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59a30 is same with the state(5) to be set 00:18:11.779 [2024-04-19 03:31:49.106930] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59a30 (9): Bad file descriptor 00:18:11.779 [2024-04-19 03:31:49.106968] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:11.779 [2024-04-19 03:31:49.106987] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:11.779 [2024-04-19 03:31:49.107003] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:11.779 [2024-04-19 03:31:49.107023] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
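The notification bookkeeping above (notify_get_notifications -i $notify_id, then notification_count=N with notify_id advancing 1, 2, 4) suggests the following shape for the host/discovery.sh helpers; the names and the jq pipeline come straight from the trace, the wiring between them is inferred:

get_notification_count() {
    # Count events newer than the last seen notify_id, then advance it.
    notification_count=$(rpc_cmd -s /tmp/host.sock \
        notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

is_notification_count_eq() {
    local expected_count=$1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}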
00:18:11.779 [2024-04-19 03:31:49.116547] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:11.779 [2024-04-19 03:31:49.116741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.779 [2024-04-19 03:31:49.116911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.779 [2024-04-19 03:31:49.116936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b59a30 with addr=10.0.0.2, port=4420 00:18:11.779 [2024-04-19 03:31:49.116953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59a30 is same with the state(5) to be set 00:18:11.779 [2024-04-19 03:31:49.116975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59a30 (9): Bad file descriptor 00:18:11.779 [2024-04-19 03:31:49.117008] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:11.780 [2024-04-19 03:31:49.117027] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:11.780 [2024-04-19 03:31:49.117041] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:11.780 [2024-04-19 03:31:49.117060] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:11.780 [2024-04-19 03:31:49.126617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:11.780 [2024-04-19 03:31:49.126910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.127051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.127077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b59a30 with addr=10.0.0.2, port=4420 00:18:11.780 [2024-04-19 03:31:49.127094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59a30 is same with the state(5) to be set 00:18:11.780 [2024-04-19 03:31:49.127116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59a30 (9): Bad file descriptor 00:18:11.780 [2024-04-19 03:31:49.127150] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:11.780 [2024-04-19 03:31:49.127169] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:11.780 [2024-04-19 03:31:49.127189] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:11.780 [2024-04-19 03:31:49.127209] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:11.780 03:31:49 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.780 03:31:49 -- common/autotest_common.sh@904 -- # return 0 00:18:11.780 03:31:49 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:11.780 03:31:49 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:11.780 03:31:49 -- common/autotest_common.sh@901 -- # local max=10 00:18:11.780 03:31:49 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.780 03:31:49 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:11.780 03:31:49 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:11.780 [2024-04-19 03:31:49.136687] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:11.780 [2024-04-19 03:31:49.136907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.137072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.137099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b59a30 with addr=10.0.0.2, port=4420 00:18:11.780 [2024-04-19 03:31:49.137115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59a30 is same with the state(5) to be set 00:18:11.780 [2024-04-19 03:31:49.137137] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59a30 (9): Bad file descriptor 00:18:11.780 [2024-04-19 03:31:49.137158] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:11.780 [2024-04-19 03:31:49.137173] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:11.780 [2024-04-19 03:31:49.137186] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:11.780 [2024-04-19 03:31:49.137218] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:11.780 03:31:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.780 03:31:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:11.780 03:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.780 03:31:49 -- host/discovery.sh@55 -- # sort 00:18:11.780 03:31:49 -- common/autotest_common.sh@10 -- # set +x 00:18:11.780 03:31:49 -- host/discovery.sh@55 -- # xargs 00:18:11.780 [2024-04-19 03:31:49.146773] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:11.780 [2024-04-19 03:31:49.147057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.147204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.147230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b59a30 with addr=10.0.0.2, port=4420 00:18:11.780 [2024-04-19 03:31:49.147247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59a30 is same with the state(5) to be set 00:18:11.780 [2024-04-19 03:31:49.147269] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59a30 (9): Bad file descriptor 00:18:11.780 [2024-04-19 03:31:49.147303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:11.780 [2024-04-19 03:31:49.147323] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:11.780 [2024-04-19 03:31:49.147337] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:11.780 [2024-04-19 03:31:49.147356] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:11.780 [2024-04-19 03:31:49.156846] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:11.780 [2024-04-19 03:31:49.157133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.157303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.157329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b59a30 with addr=10.0.0.2, port=4420 00:18:11.780 [2024-04-19 03:31:49.157346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59a30 is same with the state(5) to be set 00:18:11.780 [2024-04-19 03:31:49.157367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59a30 (9): Bad file descriptor 00:18:11.780 [2024-04-19 03:31:49.157411] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:11.780 [2024-04-19 03:31:49.157431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:11.780 [2024-04-19 03:31:49.157445] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:11.780 [2024-04-19 03:31:49.157464] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
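The two list helpers being polled are fully visible in the @55/@59 trace fragments as rpc_cmd piped through jq, sort, and xargs; reassembled here as a sketch, modulo details of the real script:

get_subsystem_names() {
    # Controller names as one space-separated, sorted string.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # Bdev names, e.g. "nvme0n1 nvme0n2" while both namespaces are attached.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}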
00:18:11.780 03:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.780 [2024-04-19 03:31:49.166913] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:11.780 [2024-04-19 03:31:49.167110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.167308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.167334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b59a30 with addr=10.0.0.2, port=4420 00:18:11.780 [2024-04-19 03:31:49.167350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59a30 is same with the state(5) to be set 00:18:11.780 [2024-04-19 03:31:49.167373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59a30 (9): Bad file descriptor 00:18:11.780 [2024-04-19 03:31:49.167402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:11.780 [2024-04-19 03:31:49.167417] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:11.780 [2024-04-19 03:31:49.167430] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:11.780 [2024-04-19 03:31:49.167449] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:11.780 [2024-04-19 03:31:49.176980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:11.780 [2024-04-19 03:31:49.177214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.177410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.177449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b59a30 with addr=10.0.0.2, port=4420 00:18:11.780 [2024-04-19 03:31:49.177466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59a30 is same with the state(5) to be set 00:18:11.780 [2024-04-19 03:31:49.177489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59a30 (9): Bad file descriptor 00:18:11.780 [2024-04-19 03:31:49.177523] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:11.780 [2024-04-19 03:31:49.177542] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:11.780 [2024-04-19 03:31:49.177556] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:11.780 [2024-04-19 03:31:49.177575] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:11.780 03:31:49 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:11.780 03:31:49 -- common/autotest_common.sh@904 -- # return 0 00:18:11.780 03:31:49 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:11.780 03:31:49 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:11.780 03:31:49 -- common/autotest_common.sh@901 -- # local max=10 00:18:11.780 03:31:49 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.780 03:31:49 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:11.780 03:31:49 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:11.780 03:31:49 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:11.780 03:31:49 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:11.780 03:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.780 03:31:49 -- host/discovery.sh@63 -- # sort -n 00:18:11.780 03:31:49 -- common/autotest_common.sh@10 -- # set +x 00:18:11.780 03:31:49 -- host/discovery.sh@63 -- # xargs 00:18:11.780 [2024-04-19 03:31:49.187050] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:11.780 [2024-04-19 03:31:49.187244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.187416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.780 [2024-04-19 03:31:49.187443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b59a30 with addr=10.0.0.2, port=4420 00:18:11.780 [2024-04-19 03:31:49.187460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59a30 is same with the state(5) to be set 00:18:11.780 [2024-04-19 03:31:49.187481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59a30 (9): Bad file descriptor 00:18:11.780 [2024-04-19 03:31:49.187502] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:11.780 [2024-04-19 03:31:49.187517] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:11.780 [2024-04-19 03:31:49.187532] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:11.780 [2024-04-19 03:31:49.187550] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
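get_subsystem_paths, traced at @63, reports the service IDs (TCP ports) of every path attached to one controller, which is how the script sees "4420 4421" collapse to "4421" once the first listener goes away. Reassembled from the trace (the parameter name is an assumption):

get_subsystem_paths() {
    local nvme_ctrlr_name=$1   # e.g. nvme0
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$nvme_ctrlr_name" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}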
00:18:11.780 03:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.780 [2024-04-19 03:31:49.197117] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:11.780 [2024-04-19 03:31:49.197409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.781 [2024-04-19 03:31:49.197572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.781 [2024-04-19 03:31:49.197598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b59a30 with addr=10.0.0.2, port=4420 00:18:11.781 [2024-04-19 03:31:49.197615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59a30 is same with the state(5) to be set 00:18:11.781 [2024-04-19 03:31:49.197638] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59a30 (9): Bad file descriptor 00:18:11.781 [2024-04-19 03:31:49.197671] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:11.781 [2024-04-19 03:31:49.197689] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:11.781 [2024-04-19 03:31:49.197703] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:11.781 [2024-04-19 03:31:49.197722] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:11.781 [2024-04-19 03:31:49.207187] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:11.781 [2024-04-19 03:31:49.207414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.781 [2024-04-19 03:31:49.207590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.781 [2024-04-19 03:31:49.207617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b59a30 with addr=10.0.0.2, port=4420 00:18:11.781 [2024-04-19 03:31:49.207633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59a30 is same with the state(5) to be set 00:18:11.781 [2024-04-19 03:31:49.207665] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59a30 (9): Bad file descriptor 00:18:11.781 [2024-04-19 03:31:49.207701] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:11.781 [2024-04-19 03:31:49.207719] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:11.781 [2024-04-19 03:31:49.207733] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:11.781 [2024-04-19 03:31:49.207779] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
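Stripped of the polling noise, the failover scenario traced between @118 and @132 reduces to two RPCs and two conditions (with $NVMF_PORT=4420 and $NVMF_SECOND_PORT=4421, as the expansions in the trace show):

rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4421
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'

rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'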
00:18:11.781 [2024-04-19 03:31:49.213750] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:11.781 [2024-04-19 03:31:49.213780] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:11.781 03:31:49 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:18:11.781 03:31:49 -- common/autotest_common.sh@906 -- # sleep 1 00:18:12.715 03:31:50 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:12.715 03:31:50 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:12.715 03:31:50 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:12.715 03:31:50 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:12.715 03:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.715 03:31:50 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:12.715 03:31:50 -- common/autotest_common.sh@10 -- # set +x 00:18:12.715 03:31:50 -- host/discovery.sh@63 -- # sort -n 00:18:12.715 03:31:50 -- host/discovery.sh@63 -- # xargs 00:18:12.715 03:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.972 03:31:50 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:18:12.972 03:31:50 -- common/autotest_common.sh@904 -- # return 0 00:18:12.972 03:31:50 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:12.972 03:31:50 -- host/discovery.sh@79 -- # expected_count=0 00:18:12.972 03:31:50 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:12.972 03:31:50 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:12.972 03:31:50 -- common/autotest_common.sh@901 -- # local max=10 00:18:12.972 03:31:50 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:12.972 03:31:50 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:12.972 03:31:50 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:12.972 03:31:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:12.972 03:31:50 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:12.972 03:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.972 03:31:50 -- common/autotest_common.sh@10 -- # set +x 00:18:12.972 03:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.972 03:31:50 -- host/discovery.sh@74 -- # notification_count=0 00:18:12.972 03:31:50 -- host/discovery.sh@75 -- # notify_id=2 00:18:12.972 03:31:50 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:12.972 03:31:50 -- common/autotest_common.sh@904 -- # return 0 00:18:12.972 03:31:50 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:12.972 03:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.972 03:31:50 -- common/autotest_common.sh@10 -- # set +x 00:18:12.972 03:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.972 03:31:50 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:12.972 03:31:50 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:12.972 03:31:50 -- common/autotest_common.sh@901 -- # local max=10 00:18:12.972 03:31:50 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:12.972 03:31:50 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:12.972 03:31:50 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:12.972 03:31:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:12.972 03:31:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:12.972 03:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.972 03:31:50 -- host/discovery.sh@59 -- # sort 00:18:12.972 03:31:50 -- common/autotest_common.sh@10 -- # set +x 00:18:12.972 03:31:50 -- host/discovery.sh@59 -- # xargs 00:18:12.972 03:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.972 03:31:50 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:18:12.972 03:31:50 -- common/autotest_common.sh@904 -- # return 0 00:18:12.972 03:31:50 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:12.972 03:31:50 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:12.972 03:31:50 -- common/autotest_common.sh@901 -- # local max=10 00:18:12.972 03:31:50 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:12.972 03:31:50 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:12.972 03:31:50 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:12.972 03:31:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:12.972 03:31:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:12.972 03:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.972 03:31:50 -- common/autotest_common.sh@10 -- # set +x 00:18:12.972 03:31:50 -- host/discovery.sh@55 -- # sort 00:18:12.972 03:31:50 -- host/discovery.sh@55 -- # xargs 00:18:12.972 03:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.972 03:31:50 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:18:12.972 03:31:50 -- common/autotest_common.sh@904 -- # return 0 00:18:12.972 03:31:50 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:12.972 03:31:50 -- host/discovery.sh@79 -- # expected_count=2 00:18:12.972 03:31:50 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:12.972 03:31:50 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:12.972 03:31:50 -- common/autotest_common.sh@901 -- # local max=10 00:18:12.972 03:31:50 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:12.972 03:31:50 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:12.972 03:31:50 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:12.972 03:31:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:12.972 03:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.972 03:31:50 -- host/discovery.sh@74 -- # jq '. | length' 00:18:12.972 03:31:50 -- common/autotest_common.sh@10 -- # set +x 00:18:12.972 03:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.972 03:31:50 -- host/discovery.sh@74 -- # notification_count=2 00:18:12.972 03:31:50 -- host/discovery.sh@75 -- # notify_id=4 00:18:12.972 03:31:50 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:12.972 03:31:50 -- common/autotest_common.sh@904 -- # return 0 00:18:12.972 03:31:50 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:12.972 03:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.972 03:31:50 -- common/autotest_common.sh@10 -- # set +x 00:18:14.345 [2024-04-19 03:31:51.516239] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:14.345 [2024-04-19 03:31:51.516269] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:14.345 [2024-04-19 03:31:51.516292] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:14.345 [2024-04-19 03:31:51.604570] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:14.345 [2024-04-19 03:31:51.710031] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:14.345 [2024-04-19 03:31:51.710074] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:14.345 03:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.345 03:31:51 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:14.345 03:31:51 -- common/autotest_common.sh@638 -- # local es=0 00:18:14.345 03:31:51 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:14.345 03:31:51 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:14.345 03:31:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:14.345 03:31:51 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:14.345 03:31:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:14.345 03:31:51 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:14.345 03:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.345 03:31:51 -- 
common/autotest_common.sh@10 -- # set +x 00:18:14.345 request: 00:18:14.345 { 00:18:14.345 "name": "nvme", 00:18:14.345 "trtype": "tcp", 00:18:14.345 "traddr": "10.0.0.2", 00:18:14.345 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:14.345 "adrfam": "ipv4", 00:18:14.345 "trsvcid": "8009", 00:18:14.345 "wait_for_attach": true, 00:18:14.345 "method": "bdev_nvme_start_discovery", 00:18:14.345 "req_id": 1 00:18:14.345 } 00:18:14.345 Got JSON-RPC error response 00:18:14.345 response: 00:18:14.345 { 00:18:14.345 "code": -17, 00:18:14.345 "message": "File exists" 00:18:14.345 } 00:18:14.345 03:31:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:14.345 03:31:51 -- common/autotest_common.sh@641 -- # es=1 00:18:14.345 03:31:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:14.345 03:31:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:14.345 03:31:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:14.345 03:31:51 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:14.345 03:31:51 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:14.345 03:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.345 03:31:51 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:14.345 03:31:51 -- common/autotest_common.sh@10 -- # set +x 00:18:14.345 03:31:51 -- host/discovery.sh@67 -- # sort 00:18:14.345 03:31:51 -- host/discovery.sh@67 -- # xargs 00:18:14.345 03:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.345 03:31:51 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:14.345 03:31:51 -- host/discovery.sh@146 -- # get_bdev_list 00:18:14.345 03:31:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:14.345 03:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.345 03:31:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:14.345 03:31:51 -- common/autotest_common.sh@10 -- # set +x 00:18:14.345 03:31:51 -- host/discovery.sh@55 -- # sort 00:18:14.345 03:31:51 -- host/discovery.sh@55 -- # xargs 00:18:14.345 03:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.345 03:31:51 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:14.345 03:31:51 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:14.345 03:31:51 -- common/autotest_common.sh@638 -- # local es=0 00:18:14.345 03:31:51 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:14.345 03:31:51 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:14.345 03:31:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:14.345 03:31:51 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:14.345 03:31:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:14.345 03:31:51 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:14.345 03:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.345 03:31:51 -- common/autotest_common.sh@10 -- # set +x 00:18:14.345 request: 00:18:14.345 { 00:18:14.345 "name": "nvme_second", 00:18:14.345 "trtype": "tcp", 00:18:14.345 "traddr": "10.0.0.2", 00:18:14.345 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:18:14.345 "adrfam": "ipv4", 00:18:14.345 "trsvcid": "8009", 00:18:14.345 "wait_for_attach": true, 00:18:14.345 "method": "bdev_nvme_start_discovery", 00:18:14.345 "req_id": 1 00:18:14.345 } 00:18:14.345 Got JSON-RPC error response 00:18:14.345 response: 00:18:14.345 { 00:18:14.345 "code": -17, 00:18:14.345 "message": "File exists" 00:18:14.345 } 00:18:14.345 03:31:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:14.345 03:31:51 -- common/autotest_common.sh@641 -- # es=1 00:18:14.345 03:31:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:14.345 03:31:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:14.345 03:31:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:14.345 03:31:51 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:14.345 03:31:51 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:14.345 03:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.345 03:31:51 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:14.345 03:31:51 -- common/autotest_common.sh@10 -- # set +x 00:18:14.345 03:31:51 -- host/discovery.sh@67 -- # sort 00:18:14.345 03:31:51 -- host/discovery.sh@67 -- # xargs 00:18:14.345 03:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.345 03:31:51 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:14.345 03:31:51 -- host/discovery.sh@152 -- # get_bdev_list 00:18:14.345 03:31:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:14.345 03:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.345 03:31:51 -- common/autotest_common.sh@10 -- # set +x 00:18:14.345 03:31:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:14.345 03:31:51 -- host/discovery.sh@55 -- # sort 00:18:14.345 03:31:51 -- host/discovery.sh@55 -- # xargs 00:18:14.345 03:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.603 03:31:51 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:14.603 03:31:51 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:14.603 03:31:51 -- common/autotest_common.sh@638 -- # local es=0 00:18:14.603 03:31:51 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:14.603 03:31:51 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:14.603 03:31:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:14.603 03:31:51 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:14.603 03:31:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:14.603 03:31:51 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:14.603 03:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.603 03:31:51 -- common/autotest_common.sh@10 -- # set +x 00:18:15.536 [2024-04-19 03:31:52.921493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:15.536 [2024-04-19 03:31:52.921702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:15.536 [2024-04-19 03:31:52.921729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x1b5b330 with addr=10.0.0.2, port=8010 00:18:15.536 [2024-04-19 03:31:52.921751] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:15.536 [2024-04-19 03:31:52.921783] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:15.536 [2024-04-19 03:31:52.921799] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:16.468 [2024-04-19 03:31:53.923950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.468 [2024-04-19 03:31:53.924141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.468 [2024-04-19 03:31:53.924167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5b330 with addr=10.0.0.2, port=8010 00:18:16.468 [2024-04-19 03:31:53.924191] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:16.468 [2024-04-19 03:31:53.924205] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:16.468 [2024-04-19 03:31:53.924217] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:17.400 [2024-04-19 03:31:54.926143] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:18:17.400 request: 00:18:17.400 { 00:18:17.400 "name": "nvme_second", 00:18:17.400 "trtype": "tcp", 00:18:17.400 "traddr": "10.0.0.2", 00:18:17.400 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:17.400 "adrfam": "ipv4", 00:18:17.400 "trsvcid": "8010", 00:18:17.400 "attach_timeout_ms": 3000, 00:18:17.400 "method": "bdev_nvme_start_discovery", 00:18:17.400 "req_id": 1 00:18:17.400 } 00:18:17.400 Got JSON-RPC error response 00:18:17.400 response: 00:18:17.400 { 00:18:17.400 "code": -110, 00:18:17.400 "message": "Connection timed out" 00:18:17.401 } 00:18:17.401 03:31:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:17.401 03:31:54 -- common/autotest_common.sh@641 -- # es=1 00:18:17.401 03:31:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:17.401 03:31:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:17.401 03:31:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:17.401 03:31:54 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:17.401 03:31:54 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:17.401 03:31:54 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:17.401 03:31:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:17.401 03:31:54 -- common/autotest_common.sh@10 -- # set +x 00:18:17.401 03:31:54 -- host/discovery.sh@67 -- # sort 00:18:17.401 03:31:54 -- host/discovery.sh@67 -- # xargs 00:18:17.401 03:31:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:17.659 03:31:54 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:17.659 03:31:54 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:17.659 03:31:54 -- host/discovery.sh@161 -- # kill 298496 00:18:17.659 03:31:54 -- host/discovery.sh@162 -- # nvmftestfini 00:18:17.659 03:31:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:17.659 03:31:54 -- nvmf/common.sh@117 -- # sync 00:18:17.659 03:31:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.659 03:31:54 -- nvmf/common.sh@120 -- # set +e 00:18:17.659 03:31:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.659 03:31:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.659 rmmod nvme_tcp 00:18:17.659 rmmod nvme_fabrics 
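The "File exists" (-17) and "Connection timed out" (-110) responses above are expected failures, asserted through the NOT wrapper whose trace runs from @626 to @665. A simplified sketch of it; the real helper also filters allowed signals and error strings, and both of those branches are no-ops in this run:

NOT() {
    local es=0
    "$@" || es=$?    # trace: es=1 after the RPC returns the expected error
    # Real helper: extra handling for es > 128 (signal exits) and an
    # allow-list of error messages; neither applies here.
    (( !es == 0 ))   # succeed only if the wrapped command failed
}

# As invoked in the trace:
# NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second ...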
00:18:17.659 rmmod nvme_keyring 00:18:17.659 03:31:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.659 03:31:55 -- nvmf/common.sh@124 -- # set -e 00:18:17.659 03:31:55 -- nvmf/common.sh@125 -- # return 0 00:18:17.659 03:31:55 -- nvmf/common.sh@478 -- # '[' -n 298466 ']' 00:18:17.659 03:31:55 -- nvmf/common.sh@479 -- # killprocess 298466 00:18:17.659 03:31:55 -- common/autotest_common.sh@936 -- # '[' -z 298466 ']' 00:18:17.659 03:31:55 -- common/autotest_common.sh@940 -- # kill -0 298466 00:18:17.659 03:31:55 -- common/autotest_common.sh@941 -- # uname 00:18:17.659 03:31:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:17.659 03:31:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 298466 00:18:17.659 03:31:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:17.659 03:31:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:17.659 03:31:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 298466' 00:18:17.659 killing process with pid 298466 00:18:17.659 03:31:55 -- common/autotest_common.sh@955 -- # kill 298466 00:18:17.659 03:31:55 -- common/autotest_common.sh@960 -- # wait 298466 00:18:17.918 03:31:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:17.918 03:31:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:17.918 03:31:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:17.918 03:31:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.918 03:31:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.918 03:31:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.918 03:31:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.918 03:31:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.479 03:31:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:20.479 00:18:20.479 real 0m15.784s 00:18:20.479 user 0m24.542s 00:18:20.479 sys 0m2.943s 00:18:20.479 03:31:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:20.479 03:31:57 -- common/autotest_common.sh@10 -- # set +x 00:18:20.479 ************************************ 00:18:20.479 END TEST nvmf_discovery 00:18:20.479 ************************************ 00:18:20.479 03:31:57 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:20.479 03:31:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:20.479 03:31:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:20.479 03:31:57 -- common/autotest_common.sh@10 -- # set +x 00:18:20.479 ************************************ 00:18:20.479 START TEST nvmf_discovery_remove_ifc 00:18:20.479 ************************************ 00:18:20.479 03:31:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:20.479 * Looking for test storage... 
00:18:20.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:20.479 03:31:57 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.479 03:31:57 -- nvmf/common.sh@7 -- # uname -s 00:18:20.479 03:31:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.479 03:31:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.479 03:31:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.479 03:31:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.479 03:31:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.479 03:31:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.479 03:31:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.479 03:31:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.479 03:31:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.479 03:31:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.479 03:31:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:20.479 03:31:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:20.479 03:31:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.479 03:31:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.479 03:31:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.479 03:31:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.479 03:31:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.479 03:31:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.479 03:31:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.479 03:31:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.479 03:31:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.479 03:31:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.479 03:31:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.479 03:31:57 -- paths/export.sh@5 -- # export PATH 00:18:20.479 03:31:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.479 03:31:57 -- nvmf/common.sh@47 -- # : 0 00:18:20.479 03:31:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:20.479 03:31:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:20.479 03:31:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.479 03:31:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.479 03:31:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.479 03:31:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:20.479 03:31:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:20.479 03:31:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:20.479 03:31:57 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:20.479 03:31:57 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:20.479 03:31:57 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:20.479 03:31:57 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:20.479 03:31:57 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:20.479 03:31:57 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:20.479 03:31:57 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:20.479 03:31:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:20.479 03:31:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.479 03:31:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:20.479 03:31:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:20.479 03:31:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:20.479 03:31:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.479 03:31:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.479 03:31:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.479 03:31:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:20.479 03:31:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:20.479 03:31:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:20.480 03:31:57 -- common/autotest_common.sh@10 -- # set +x 00:18:22.381 03:31:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:22.381 03:31:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:22.381 03:31:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:22.381 03:31:59 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:22.381 03:31:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:22.381 03:31:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:22.382 03:31:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:22.382 03:31:59 -- nvmf/common.sh@295 -- # net_devs=() 00:18:22.382 03:31:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:22.382 03:31:59 -- nvmf/common.sh@296 -- # e810=() 00:18:22.382 03:31:59 -- nvmf/common.sh@296 -- # local -ga e810 00:18:22.382 03:31:59 -- nvmf/common.sh@297 -- # x722=() 00:18:22.382 03:31:59 -- nvmf/common.sh@297 -- # local -ga x722 00:18:22.382 03:31:59 -- nvmf/common.sh@298 -- # mlx=() 00:18:22.382 03:31:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:22.382 03:31:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:22.382 03:31:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:22.382 03:31:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:22.382 03:31:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:22.382 03:31:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:22.382 03:31:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:22.382 03:31:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:22.382 03:31:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:22.382 03:31:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:22.382 03:31:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:22.382 03:31:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:22.382 03:31:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:22.382 03:31:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:22.382 03:31:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:22.382 03:31:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.382 03:31:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:22.382 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:22.382 03:31:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.382 03:31:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:22.382 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:22.382 03:31:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:22.382 03:31:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:22.382 03:31:59 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.382 03:31:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.382 03:31:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:22.382 03:31:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.382 03:31:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:22.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:22.382 03:31:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.382 03:31:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.382 03:31:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.382 03:31:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:22.382 03:31:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.382 03:31:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:22.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:22.382 03:31:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.382 03:31:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:22.382 03:31:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:22.382 03:31:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:22.382 03:31:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.382 03:31:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.382 03:31:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:22.382 03:31:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:22.382 03:31:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:22.382 03:31:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:22.382 03:31:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:22.382 03:31:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:22.382 03:31:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.382 03:31:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:22.382 03:31:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:22.382 03:31:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:22.382 03:31:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:22.382 03:31:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:22.382 03:31:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:22.382 03:31:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:22.382 03:31:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:22.382 03:31:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:22.382 03:31:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:22.382 03:31:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:22.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:22.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:18:22.382 00:18:22.382 --- 10.0.0.2 ping statistics --- 00:18:22.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.382 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:18:22.382 03:31:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:22.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:18:22.382 00:18:22.382 --- 10.0.0.1 ping statistics --- 00:18:22.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.382 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:18:22.382 03:31:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.382 03:31:59 -- nvmf/common.sh@411 -- # return 0 00:18:22.382 03:31:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:22.382 03:31:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.382 03:31:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:22.382 03:31:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.382 03:31:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:22.382 03:31:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:22.382 03:31:59 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:22.382 03:31:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:22.382 03:31:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:22.382 03:31:59 -- common/autotest_common.sh@10 -- # set +x 00:18:22.382 03:31:59 -- nvmf/common.sh@470 -- # nvmfpid=301928 00:18:22.382 03:31:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:22.382 03:31:59 -- nvmf/common.sh@471 -- # waitforlisten 301928 00:18:22.382 03:31:59 -- common/autotest_common.sh@817 -- # '[' -z 301928 ']' 00:18:22.382 03:31:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.382 03:31:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:22.382 03:31:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.382 03:31:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:22.382 03:31:59 -- common/autotest_common.sh@10 -- # set +x 00:18:22.382 [2024-04-19 03:31:59.826050] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:18:22.382 [2024-04-19 03:31:59.826142] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.382 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.382 [2024-04-19 03:31:59.892907] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.640 [2024-04-19 03:32:00.010113] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.640 [2024-04-19 03:32:00.010189] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
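Before nvmf_tgt came up, the nvmf_tcp_init sequence traced above wired the node's two E810 ports into a back-to-back topology: one port becomes the target inside a private network namespace, its sibling stays in the root namespace as the initiator, and a single iptables rule opens TCP port 4420. Condensed from the trace, with the interface names from this run (the cvl_* names vary per node):

# target port lives in its own namespace; initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
ping -c 1 10.0.0.2                                                 # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator

Every later target-side command is prefixed with "ip netns exec cvl_0_0_ns_spdk", which is why the nvmf_tgt instance below listens on 10.0.0.2 while the host-side process on /tmp/host.sock connects from the root namespace.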
00:18:22.640 [2024-04-19 03:32:00.010213] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.640 [2024-04-19 03:32:00.010226] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.640 [2024-04-19 03:32:00.010238] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.640 [2024-04-19 03:32:00.010279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.574 03:32:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:23.574 03:32:00 -- common/autotest_common.sh@850 -- # return 0 00:18:23.574 03:32:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:23.574 03:32:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:23.574 03:32:00 -- common/autotest_common.sh@10 -- # set +x 00:18:23.574 03:32:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.574 03:32:00 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:23.574 03:32:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:23.574 03:32:00 -- common/autotest_common.sh@10 -- # set +x 00:18:23.574 [2024-04-19 03:32:00.812985] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.574 [2024-04-19 03:32:00.821155] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:23.574 null0 00:18:23.574 [2024-04-19 03:32:00.853122] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.574 03:32:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:23.574 03:32:00 -- host/discovery_remove_ifc.sh@59 -- # hostpid=302079 00:18:23.574 03:32:00 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:23.574 03:32:00 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 302079 /tmp/host.sock 00:18:23.574 03:32:00 -- common/autotest_common.sh@817 -- # '[' -z 302079 ']' 00:18:23.574 03:32:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:18:23.574 03:32:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:23.574 03:32:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:23.574 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:23.574 03:32:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:23.574 03:32:00 -- common/autotest_common.sh@10 -- # set +x 00:18:23.574 [2024-04-19 03:32:00.915656] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:18:23.574 [2024-04-19 03:32:00.915735] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302079 ] 00:18:23.574 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.574 [2024-04-19 03:32:00.975020] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.574 [2024-04-19 03:32:01.090596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.574 03:32:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:23.574 03:32:01 -- common/autotest_common.sh@850 -- # return 0 00:18:23.574 03:32:01 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:23.574 03:32:01 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:23.574 03:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:23.574 03:32:01 -- common/autotest_common.sh@10 -- # set +x 00:18:23.574 03:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:23.574 03:32:01 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:23.574 03:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:23.574 03:32:01 -- common/autotest_common.sh@10 -- # set +x 00:18:23.832 03:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:23.832 03:32:01 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:23.832 03:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:23.832 03:32:01 -- common/autotest_common.sh@10 -- # set +x 00:18:24.799 [2024-04-19 03:32:02.282248] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:24.799 [2024-04-19 03:32:02.282276] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:24.799 [2024-04-19 03:32:02.282300] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:25.057 [2024-04-19 03:32:02.409748] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:25.058 [2024-04-19 03:32:02.512441] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:25.058 [2024-04-19 03:32:02.512520] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:25.058 [2024-04-19 03:32:02.512555] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:25.058 [2024-04-19 03:32:02.512576] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:25.058 [2024-04-19 03:32:02.512598] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:25.058 03:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.058 03:32:02 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:25.058 03:32:02 -- common/autotest_common.sh@10 -- # set +x 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:25.058 [2024-04-19 03:32:02.519307] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x23372d0 was disconnected and freed. delete nvme_qpair. 00:18:25.058 03:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:25.058 03:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.058 03:32:02 -- common/autotest_common.sh@10 -- # set +x 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:25.058 03:32:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:25.058 03:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.316 03:32:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:25.316 03:32:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:26.248 03:32:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:26.248 03:32:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.248 03:32:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:26.248 03:32:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:26.248 03:32:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.248 03:32:03 -- common/autotest_common.sh@10 -- # set +x 00:18:26.248 03:32:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:26.248 03:32:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.248 03:32:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:26.248 03:32:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:27.180 03:32:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:27.180 03:32:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:27.180 03:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.180 03:32:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:27.180 03:32:04 -- common/autotest_common.sh@10 -- # set +x 00:18:27.180 03:32:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:27.180 03:32:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:27.180 03:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.180 03:32:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:27.180 03:32:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:28.553 03:32:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:28.553 03:32:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.553 03:32:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:28.553 03:32:05 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.553 03:32:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:28.553 03:32:05 -- common/autotest_common.sh@10 -- # set +x 00:18:28.553 03:32:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:28.553 03:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.553 03:32:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:28.553 03:32:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:29.485 03:32:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:29.485 03:32:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:29.485 03:32:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:29.485 03:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:29.485 03:32:06 -- common/autotest_common.sh@10 -- # set +x 00:18:29.485 03:32:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:29.485 03:32:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:29.485 03:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:29.485 03:32:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:29.485 03:32:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:30.417 03:32:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:30.417 03:32:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:30.417 03:32:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:30.417 03:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.417 03:32:07 -- common/autotest_common.sh@10 -- # set +x 00:18:30.417 03:32:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:30.417 03:32:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:30.417 03:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.417 03:32:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:30.417 03:32:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:30.417 [2024-04-19 03:32:07.953622] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:30.417 [2024-04-19 03:32:07.953704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.417 [2024-04-19 03:32:07.953725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.417 [2024-04-19 03:32:07.953742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.417 [2024-04-19 03:32:07.953755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.417 [2024-04-19 03:32:07.953768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.417 [2024-04-19 03:32:07.953780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.417 [2024-04-19 03:32:07.953792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.417 [2024-04-19 03:32:07.953805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.417 [2024-04-19 03:32:07.953818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.417 [2024-04-19 03:32:07.953830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.417 [2024-04-19 03:32:07.953842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fd7b0 is same with the state(5) to be set 00:18:30.417 [2024-04-19 03:32:07.963639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fd7b0 (9): Bad file descriptor 00:18:30.418 [2024-04-19 03:32:07.973696] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:31.350 03:32:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:31.350 03:32:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:31.350 03:32:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:31.350 03:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.350 03:32:08 -- common/autotest_common.sh@10 -- # set +x 00:18:31.350 03:32:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:31.350 03:32:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:31.607 [2024-04-19 03:32:09.010419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:32.541 [2024-04-19 03:32:10.034454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:32.541 [2024-04-19 03:32:10.034544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22fd7b0 with addr=10.0.0.2, port=4420 00:18:32.541 [2024-04-19 03:32:10.034571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fd7b0 is same with the state(5) to be set 00:18:32.541 [2024-04-19 03:32:10.035063] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fd7b0 (9): Bad file descriptor 00:18:32.541 [2024-04-19 03:32:10.035111] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
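The one-second polling loop surrounding this disconnect storm is the test's synchronization point: after discovery attaches nvme0n1, the script deletes the target's IP and downs cvl_0_0 inside the namespace, then re-reads the host's bdev list once per second until nvme0n1 drops out when ctrlr-loss-timeout-sec expires. A sketch reconstructed from the xtrace above (rpc_cmd is the harness wrapper for scripts/rpc.py against the given socket; any retry caps and error handling are trimmed):

# list current bdev names over the host app's RPC socket, as one sorted line
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# poll until the bdev list matches what we expect ('' = everything gone)
wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
wait_for_bdev ''   # nvme0n1 must disappear once the controller is declared lost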
00:18:32.541 [2024-04-19 03:32:10.035151] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:18:32.541 [2024-04-19 03:32:10.035196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.542 [2024-04-19 03:32:10.035219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.542 [2024-04-19 03:32:10.035249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.542 [2024-04-19 03:32:10.035265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.542 [2024-04-19 03:32:10.035280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.542 [2024-04-19 03:32:10.035295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.542 [2024-04-19 03:32:10.035311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.542 [2024-04-19 03:32:10.035326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.542 [2024-04-19 03:32:10.035342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.542 [2024-04-19 03:32:10.035357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.542 [2024-04-19 03:32:10.035372] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:18:32.542 [2024-04-19 03:32:10.035607] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fdbc0 (9): Bad file descriptor 00:18:32.542 [2024-04-19 03:32:10.036624] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:32.542 [2024-04-19 03:32:10.036645] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:18:32.542 03:32:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:32.542 03:32:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:32.542 03:32:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:33.914 03:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:33.914 03:32:11 -- common/autotest_common.sh@10 -- # set +x 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:33.914 03:32:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:33.914 03:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:33.914 03:32:11 -- common/autotest_common.sh@10 -- # set +x 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:33.914 03:32:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:33.914 03:32:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:34.853 [2024-04-19 03:32:12.046441] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:34.853 [2024-04-19 03:32:12.046479] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:34.853 [2024-04-19 03:32:12.046499] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:34.853 [2024-04-19 03:32:12.132806] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:18:34.853 03:32:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:34.853 03:32:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:34.853 03:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.853 03:32:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:34.853 03:32:12 -- common/autotest_common.sh@10 -- # set +x 00:18:34.853 03:32:12 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:18:34.853 03:32:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:34.853 03:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.853 03:32:12 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:34.853 03:32:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:34.853 [2024-04-19 03:32:12.360481] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:34.853 [2024-04-19 03:32:12.360524] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:34.853 [2024-04-19 03:32:12.360555] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:34.853 [2024-04-19 03:32:12.360575] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:18:34.853 [2024-04-19 03:32:12.360586] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:34.853 [2024-04-19 03:32:12.365450] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x230df40 was disconnected and freed. delete nvme_qpair. 00:18:35.790 03:32:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:35.790 03:32:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:35.790 03:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:35.790 03:32:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:35.790 03:32:13 -- common/autotest_common.sh@10 -- # set +x 00:18:35.790 03:32:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:35.790 03:32:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:35.790 03:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:35.790 03:32:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:35.790 03:32:13 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:35.790 03:32:13 -- host/discovery_remove_ifc.sh@90 -- # killprocess 302079 00:18:35.790 03:32:13 -- common/autotest_common.sh@936 -- # '[' -z 302079 ']' 00:18:35.790 03:32:13 -- common/autotest_common.sh@940 -- # kill -0 302079 00:18:35.790 03:32:13 -- common/autotest_common.sh@941 -- # uname 00:18:35.790 03:32:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.790 03:32:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 302079 00:18:35.790 03:32:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:35.790 03:32:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:35.790 03:32:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 302079' 00:18:35.790 killing process with pid 302079 00:18:35.790 03:32:13 -- common/autotest_common.sh@955 -- # kill 302079 00:18:35.790 03:32:13 -- common/autotest_common.sh@960 -- # wait 302079 00:18:36.049 03:32:13 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:36.049 03:32:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:36.049 03:32:13 -- nvmf/common.sh@117 -- # sync 00:18:36.049 03:32:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:36.049 03:32:13 -- nvmf/common.sh@120 -- # set +e 00:18:36.049 03:32:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:36.049 03:32:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:36.049 rmmod nvme_tcp 00:18:36.049 rmmod nvme_fabrics 00:18:36.309 rmmod nvme_keyring 00:18:36.309 03:32:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.309 03:32:13 -- nvmf/common.sh@124 -- # set -e 00:18:36.309 03:32:13 -- 
nvmf/common.sh@125 -- # return 0 00:18:36.309 03:32:13 -- nvmf/common.sh@478 -- # '[' -n 301928 ']' 00:18:36.309 03:32:13 -- nvmf/common.sh@479 -- # killprocess 301928 00:18:36.309 03:32:13 -- common/autotest_common.sh@936 -- # '[' -z 301928 ']' 00:18:36.309 03:32:13 -- common/autotest_common.sh@940 -- # kill -0 301928 00:18:36.309 03:32:13 -- common/autotest_common.sh@941 -- # uname 00:18:36.309 03:32:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.309 03:32:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 301928 00:18:36.309 03:32:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:36.309 03:32:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:36.309 03:32:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 301928' 00:18:36.309 killing process with pid 301928 00:18:36.309 03:32:13 -- common/autotest_common.sh@955 -- # kill 301928 00:18:36.309 03:32:13 -- common/autotest_common.sh@960 -- # wait 301928 00:18:36.569 03:32:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:36.569 03:32:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:36.569 03:32:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:36.569 03:32:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.569 03:32:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:36.569 03:32:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.569 03:32:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.569 03:32:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.512 03:32:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:38.512 00:18:38.512 real 0m18.458s 00:18:38.512 user 0m25.419s 00:18:38.512 sys 0m3.129s 00:18:38.512 03:32:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:38.512 03:32:15 -- common/autotest_common.sh@10 -- # set +x 00:18:38.512 ************************************ 00:18:38.512 END TEST nvmf_discovery_remove_ifc 00:18:38.512 ************************************ 00:18:38.512 03:32:16 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:38.512 03:32:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:38.512 03:32:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:38.512 03:32:16 -- common/autotest_common.sh@10 -- # set +x 00:18:38.770 ************************************ 00:18:38.770 START TEST nvmf_identify_kernel_target 00:18:38.770 ************************************ 00:18:38.770 03:32:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:38.770 * Looking for test storage... 
00:18:38.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:38.770 03:32:16 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.770 03:32:16 -- nvmf/common.sh@7 -- # uname -s 00:18:38.770 03:32:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.770 03:32:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.770 03:32:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.770 03:32:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.770 03:32:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.770 03:32:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.770 03:32:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.770 03:32:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.770 03:32:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.770 03:32:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.770 03:32:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.770 03:32:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.770 03:32:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.770 03:32:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.770 03:32:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.770 03:32:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.770 03:32:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:38.770 03:32:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.770 03:32:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.770 03:32:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.770 03:32:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.770 03:32:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.770 03:32:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.770 03:32:16 -- paths/export.sh@5 -- # export PATH 00:18:38.770 03:32:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.770 03:32:16 -- nvmf/common.sh@47 -- # : 0 00:18:38.770 03:32:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.770 03:32:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.770 03:32:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.770 03:32:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.770 03:32:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.770 03:32:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.770 03:32:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:38.770 03:32:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:38.770 03:32:16 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:38.770 03:32:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:38.770 03:32:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.770 03:32:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:38.770 03:32:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:38.770 03:32:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:38.770 03:32:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.770 03:32:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.770 03:32:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.770 03:32:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:38.770 03:32:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:38.770 03:32:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:38.770 03:32:16 -- common/autotest_common.sh@10 -- # set +x 00:18:40.673 03:32:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:40.674 03:32:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.674 03:32:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.674 03:32:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.674 03:32:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.674 03:32:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.674 03:32:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.674 03:32:18 -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.674 03:32:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.674 03:32:18 -- nvmf/common.sh@296 -- # e810=() 00:18:40.674 03:32:18 -- nvmf/common.sh@296 -- # local -ga e810 00:18:40.674 03:32:18 -- nvmf/common.sh@297 -- # 
x722=() 00:18:40.674 03:32:18 -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.674 03:32:18 -- nvmf/common.sh@298 -- # mlx=() 00:18:40.674 03:32:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.674 03:32:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.674 03:32:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.674 03:32:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.674 03:32:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.674 03:32:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.674 03:32:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.674 03:32:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.674 03:32:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.674 03:32:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.674 03:32:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.674 03:32:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.674 03:32:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.674 03:32:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:40.674 03:32:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.674 03:32:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.674 03:32:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:40.674 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:40.674 03:32:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.674 03:32:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:40.674 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:40.674 03:32:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.674 03:32:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.674 03:32:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.674 03:32:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:40.674 03:32:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.674 03:32:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:40.674 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:40.674 03:32:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
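Each "Found ..." pair in this stretch comes from nvmf/common.sh's NIC discovery: PCI functions are bucketed by vendor:device ID into the e810, x722 and mlx arrays (0x1592/0x159b are E810, 0x37d2 is X722, the Mellanox IDs fill mlx), SPDK_TEST_NVMF_NICS=e810 selects the e810 bucket, and the kernel interface name for each selected function is read straight out of sysfs. The name lookup, condensed from the trace:

for pci in "${pci_devs[@]}"; do
    # the bound driver exposes the netdev under the PCI function's sysfs node,
    # e.g. /sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep just the interface name
    net_devs+=("${pci_net_devs[@]}")
done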
00:18:40.674 03:32:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.674 03:32:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.674 03:32:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:40.674 03:32:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.674 03:32:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:40.674 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:40.674 03:32:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.674 03:32:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:40.674 03:32:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:40.674 03:32:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:40.674 03:32:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.674 03:32:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.674 03:32:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.674 03:32:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:40.674 03:32:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.674 03:32:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.674 03:32:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:40.674 03:32:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.674 03:32:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.674 03:32:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:40.674 03:32:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:40.674 03:32:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.674 03:32:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.674 03:32:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.674 03:32:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.674 03:32:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:40.674 03:32:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.674 03:32:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.674 03:32:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.674 03:32:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:40.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:18:40.674 00:18:40.674 --- 10.0.0.2 ping statistics --- 00:18:40.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.674 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:18:40.674 03:32:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:40.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:18:40.674 00:18:40.674 --- 10.0.0.1 ping statistics --- 00:18:40.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.674 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:18:40.674 03:32:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.674 03:32:18 -- nvmf/common.sh@411 -- # return 0 00:18:40.674 03:32:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:40.674 03:32:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.674 03:32:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:40.674 03:32:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.674 03:32:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:40.674 03:32:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:40.933 03:32:18 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:40.933 03:32:18 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:40.933 03:32:18 -- nvmf/common.sh@717 -- # local ip 00:18:40.933 03:32:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:40.933 03:32:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:40.933 03:32:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.933 03:32:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.933 03:32:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:40.933 03:32:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.933 03:32:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:40.933 03:32:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:40.933 03:32:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:40.933 03:32:18 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:40.933 03:32:18 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:40.933 03:32:18 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:40.933 03:32:18 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:18:40.933 03:32:18 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:40.933 03:32:18 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:40.933 03:32:18 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:40.933 03:32:18 -- nvmf/common.sh@628 -- # local block nvme 00:18:40.933 03:32:18 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:40.933 03:32:18 -- nvmf/common.sh@631 -- # modprobe nvmet 00:18:40.933 03:32:18 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:40.933 03:32:18 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:18:41.868 Waiting for block devices as requested 00:18:41.868 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:18:41.868 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:42.127 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:42.127 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:42.127 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:42.127 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:42.386 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:42.386 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:42.386 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:42.386 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:42.646 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:42.646 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:42.646 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:42.646 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:42.906 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:42.906 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:42.906 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:43.167 03:32:20 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:18:43.167 03:32:20 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:43.167 03:32:20 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:18:43.167 03:32:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:43.167 03:32:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:43.167 03:32:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:43.167 03:32:20 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:18:43.167 03:32:20 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:43.167 03:32:20 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:18:43.167 No valid GPT data, bailing 00:18:43.167 03:32:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:43.167 03:32:20 -- scripts/common.sh@391 -- # pt= 00:18:43.167 03:32:20 -- scripts/common.sh@392 -- # return 1 00:18:43.167 03:32:20 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:18:43.167 03:32:20 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:18:43.167 03:32:20 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:43.167 03:32:20 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:43.167 03:32:20 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:43.167 03:32:20 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:43.167 03:32:20 -- nvmf/common.sh@656 -- # echo 1 00:18:43.167 03:32:20 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:18:43.167 03:32:20 -- nvmf/common.sh@658 -- # echo 1 00:18:43.167 03:32:20 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:18:43.167 03:32:20 -- nvmf/common.sh@661 -- # echo tcp 00:18:43.167 03:32:20 -- nvmf/common.sh@662 -- # echo 4420 00:18:43.167 03:32:20 -- nvmf/common.sh@663 -- # echo ipv4 00:18:43.167 03:32:20 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:43.167 03:32:20 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:18:43.167 00:18:43.167 Discovery Log Number of Records 2, Generation counter 2 00:18:43.167 =====Discovery Log Entry 0====== 00:18:43.167 trtype: tcp 00:18:43.167 adrfam: ipv4 00:18:43.167 subtype: current discovery subsystem 00:18:43.167 treq: not specified, sq flow control disable supported 00:18:43.167 portid: 1 00:18:43.167 trsvcid: 4420 00:18:43.167 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:43.167 traddr: 10.0.0.1 00:18:43.167 eflags: none 00:18:43.167 sectype: none 00:18:43.167 =====Discovery Log Entry 1====== 00:18:43.167 trtype: tcp 00:18:43.167 adrfam: ipv4 00:18:43.167 subtype: nvme subsystem 00:18:43.167 treq: not specified, sq flow control disable supported 00:18:43.167 portid: 1 00:18:43.167 trsvcid: 4420 00:18:43.167 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:43.167 traddr: 10.0.0.1 00:18:43.167 eflags: none 00:18:43.167 sectype: none 00:18:43.167 03:32:20 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:43.167 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:43.167 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.167 ===================================================== 00:18:43.167 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:43.167 ===================================================== 00:18:43.167 Controller Capabilities/Features 00:18:43.167 ================================ 00:18:43.167 Vendor ID: 0000 00:18:43.167 Subsystem Vendor ID: 0000 00:18:43.167 Serial Number: 4accc91d41d31e47db92 00:18:43.167 Model Number: Linux 00:18:43.167 Firmware Version: 6.7.0-68 00:18:43.167 Recommended Arb Burst: 0 00:18:43.167 IEEE OUI Identifier: 00 00 00 00:18:43.167 Multi-path I/O 00:18:43.167 May have multiple subsystem ports: No 00:18:43.167 May have multiple controllers: No 00:18:43.167 Associated with SR-IOV VF: No 00:18:43.167 Max Data Transfer Size: Unlimited 00:18:43.167 Max Number of Namespaces: 0 00:18:43.167 Max Number of I/O Queues: 1024 00:18:43.167 NVMe Specification Version (VS): 1.3 00:18:43.167 NVMe Specification Version (Identify): 1.3 00:18:43.167 Maximum Queue Entries: 1024 00:18:43.167 Contiguous Queues Required: No 00:18:43.167 Arbitration Mechanisms Supported 00:18:43.167 Weighted Round Robin: Not Supported 00:18:43.167 Vendor Specific: Not Supported 00:18:43.167 Reset Timeout: 7500 ms 00:18:43.167 Doorbell Stride: 4 bytes 00:18:43.167 NVM Subsystem Reset: Not Supported 00:18:43.167 Command Sets Supported 00:18:43.167 NVM Command Set: Supported 00:18:43.167 Boot Partition: Not Supported 00:18:43.167 Memory Page Size Minimum: 4096 bytes 00:18:43.167 Memory Page Size Maximum: 4096 bytes 00:18:43.167 Persistent Memory Region: Not Supported 00:18:43.167 Optional Asynchronous Events Supported 00:18:43.167 Namespace Attribute Notices: Not Supported 00:18:43.167 Firmware Activation Notices: Not Supported 00:18:43.167 ANA Change Notices: Not Supported 00:18:43.167 PLE Aggregate Log Change Notices: Not Supported 00:18:43.167 LBA Status Info Alert Notices: Not Supported 00:18:43.167 EGE Aggregate Log Change Notices: Not Supported 00:18:43.167 Normal NVM Subsystem Shutdown event: Not Supported 00:18:43.167 Zone Descriptor Change Notices: Not Supported 00:18:43.167 Discovery Log Change Notices: Supported 
00:18:43.167 Controller Attributes
00:18:43.167 128-bit Host Identifier: Not Supported
00:18:43.167 Non-Operational Permissive Mode: Not Supported
00:18:43.167 NVM Sets: Not Supported
00:18:43.167 Read Recovery Levels: Not Supported
00:18:43.167 Endurance Groups: Not Supported
00:18:43.167 Predictable Latency Mode: Not Supported
00:18:43.167 Traffic Based Keep ALive: Not Supported
00:18:43.167 Namespace Granularity: Not Supported
00:18:43.167 SQ Associations: Not Supported
00:18:43.167 UUID List: Not Supported
00:18:43.167 Multi-Domain Subsystem: Not Supported
00:18:43.167 Fixed Capacity Management: Not Supported
00:18:43.167 Variable Capacity Management: Not Supported
00:18:43.167 Delete Endurance Group: Not Supported
00:18:43.167 Delete NVM Set: Not Supported
00:18:43.167 Extended LBA Formats Supported: Not Supported
00:18:43.167 Flexible Data Placement Supported: Not Supported
00:18:43.167
00:18:43.167 Controller Memory Buffer Support
00:18:43.167 ================================
00:18:43.167 Supported: No
00:18:43.167
00:18:43.167 Persistent Memory Region Support
00:18:43.167 ================================
00:18:43.167 Supported: No
00:18:43.167
00:18:43.167 Admin Command Set Attributes
00:18:43.167 ============================
00:18:43.167 Security Send/Receive: Not Supported
00:18:43.167 Format NVM: Not Supported
00:18:43.167 Firmware Activate/Download: Not Supported
00:18:43.167 Namespace Management: Not Supported
00:18:43.167 Device Self-Test: Not Supported
00:18:43.167 Directives: Not Supported
00:18:43.167 NVMe-MI: Not Supported
00:18:43.167 Virtualization Management: Not Supported
00:18:43.167 Doorbell Buffer Config: Not Supported
00:18:43.167 Get LBA Status Capability: Not Supported
00:18:43.167 Command & Feature Lockdown Capability: Not Supported
00:18:43.167 Abort Command Limit: 1
00:18:43.167 Async Event Request Limit: 1
00:18:43.167 Number of Firmware Slots: N/A
00:18:43.167 Firmware Slot 1 Read-Only: N/A
00:18:43.167 Firmware Activation Without Reset: N/A
00:18:43.167 Multiple Update Detection Support: N/A
00:18:43.167 Firmware Update Granularity: No Information Provided
00:18:43.167 Per-Namespace SMART Log: No
00:18:43.168 Asymmetric Namespace Access Log Page: Not Supported
00:18:43.168 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:18:43.168 Command Effects Log Page: Not Supported
00:18:43.168 Get Log Page Extended Data: Supported
00:18:43.168 Telemetry Log Pages: Not Supported
00:18:43.168 Persistent Event Log Pages: Not Supported
00:18:43.168 Supported Log Pages Log Page: May Support
00:18:43.168 Commands Supported & Effects Log Page: Not Supported
00:18:43.168 Feature Identifiers & Effects Log Page:May Support
00:18:43.168 NVMe-MI Commands & Effects Log Page: May Support
00:18:43.168 Data Area 4 for Telemetry Log: Not Supported
00:18:43.168 Error Log Page Entries Supported: 1
00:18:43.168 Keep Alive: Not Supported
00:18:43.168
00:18:43.168 NVM Command Set Attributes
00:18:43.168 ==========================
00:18:43.168 Submission Queue Entry Size
00:18:43.168 Max: 1
00:18:43.168 Min: 1
00:18:43.168 Completion Queue Entry Size
00:18:43.168 Max: 1
00:18:43.168 Min: 1
00:18:43.168 Number of Namespaces: 0
00:18:43.168 Compare Command: Not Supported
00:18:43.168 Write Uncorrectable Command: Not Supported
00:18:43.168 Dataset Management Command: Not Supported
00:18:43.168 Write Zeroes Command: Not Supported
00:18:43.168 Set Features Save Field: Not Supported
00:18:43.168 Reservations: Not Supported
00:18:43.168 Timestamp: Not Supported
00:18:43.168 Copy: Not Supported
00:18:43.168 Volatile Write Cache: Not Present
00:18:43.168 Atomic Write Unit (Normal): 1
00:18:43.168 Atomic Write Unit (PFail): 1
00:18:43.168 Atomic Compare & Write Unit: 1
00:18:43.168 Fused Compare & Write: Not Supported
00:18:43.168 Scatter-Gather List
00:18:43.168 SGL Command Set: Supported
00:18:43.168 SGL Keyed: Not Supported
00:18:43.168 SGL Bit Bucket Descriptor: Not Supported
00:18:43.168 SGL Metadata Pointer: Not Supported
00:18:43.168 Oversized SGL: Not Supported
00:18:43.168 SGL Metadata Address: Not Supported
00:18:43.168 SGL Offset: Supported
00:18:43.168 Transport SGL Data Block: Not Supported
00:18:43.168 Replay Protected Memory Block: Not Supported
00:18:43.168
00:18:43.168 Firmware Slot Information
00:18:43.168 =========================
00:18:43.168 Active slot: 0
00:18:43.168
00:18:43.168
00:18:43.168 Error Log
00:18:43.168 =========
00:18:43.168
00:18:43.168 Active Namespaces
00:18:43.168 =================
00:18:43.168 Discovery Log Page
00:18:43.168 ==================
00:18:43.168 Generation Counter: 2
00:18:43.168 Number of Records: 2
00:18:43.168 Record Format: 0
00:18:43.168
00:18:43.168 Discovery Log Entry 0
00:18:43.168 ----------------------
00:18:43.168 Transport Type: 3 (TCP)
00:18:43.168 Address Family: 1 (IPv4)
00:18:43.168 Subsystem Type: 3 (Current Discovery Subsystem)
00:18:43.168 Entry Flags:
00:18:43.168 Duplicate Returned Information: 0
00:18:43.168 Explicit Persistent Connection Support for Discovery: 0
00:18:43.168 Transport Requirements:
00:18:43.168 Secure Channel: Not Specified
00:18:43.168 Port ID: 1 (0x0001)
00:18:43.168 Controller ID: 65535 (0xffff)
00:18:43.168 Admin Max SQ Size: 32
00:18:43.168 Transport Service Identifier: 4420
00:18:43.168 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:18:43.168 Transport Address: 10.0.0.1
00:18:43.168 Discovery Log Entry 1
00:18:43.168 ----------------------
00:18:43.168 Transport Type: 3 (TCP)
00:18:43.168 Address Family: 1 (IPv4)
00:18:43.168 Subsystem Type: 2 (NVM Subsystem)
00:18:43.168 Entry Flags:
00:18:43.168 Duplicate Returned Information: 0
00:18:43.168 Explicit Persistent Connection Support for Discovery: 0
00:18:43.168 Transport Requirements:
00:18:43.168 Secure Channel: Not Specified
00:18:43.168 Port ID: 1 (0x0001)
00:18:43.168 Controller ID: 65535 (0xffff)
00:18:43.168 Admin Max SQ Size: 32
00:18:43.168 Transport Service Identifier: 4420
00:18:43.168 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn
00:18:43.168 Transport Address: 10.0.0.1
00:18:43.168 03:32:20 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:18:43.168 EAL: No free 2048 kB hugepages reported on node 1
00:18:43.168 get_feature(0x01) failed
00:18:43.168 get_feature(0x02) failed
00:18:43.168 get_feature(0x04) failed
00:18:43.168 =====================================================
00:18:43.168 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:18:43.168 =====================================================
00:18:43.168 Controller Capabilities/Features
00:18:43.168 ================================
00:18:43.168 Vendor ID: 0000
00:18:43.168 Subsystem Vendor ID: 0000
00:18:43.168 Serial Number: 2aff3a8c82ec5b017106
00:18:43.168 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn
00:18:43.168 Firmware Version: 6.7.0-68
00:18:43.168 Recommended Arb Burst: 6
00:18:43.168 IEEE OUI Identifier: 00 00 00
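Each discovery log entry carries exactly the parameters a host needs to build a connect command: trtype, adrfam, traddr, trsvcid, and subnqn map one-to-one onto nvme-cli flags. A sketch of connecting to the data subsystem from entry 1 (illustrative only; this particular test just inspects the identify data rather than connecting):

# Connect to the subsystem advertised in Discovery Log Entry 1.
nvme connect --transport=tcp --traddr=10.0.0.1 --trsvcid=4420 \
    --nqn=nqn.2016-06.io.spdk:testnqn \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
nvme list   # the exported namespace then appears as a local block device

Note that the first identify dump above is against the discovery controller itself, which is why nearly every optional feature reads "Not Supported" and the admin queue is capped at 32 entries; the dump that follows is against the data subsystem and looks quite different.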
00:18:43.168 Multi-path I/O
00:18:43.168 May have multiple subsystem ports: Yes
00:18:43.168 May have multiple controllers: Yes
00:18:43.168 Associated with SR-IOV VF: No
00:18:43.168 Max Data Transfer Size: Unlimited
00:18:43.168 Max Number of Namespaces: 1024
00:18:43.168 Max Number of I/O Queues: 128
00:18:43.168 NVMe Specification Version (VS): 1.3
00:18:43.168 NVMe Specification Version (Identify): 1.3
00:18:43.168 Maximum Queue Entries: 1024
00:18:43.168 Contiguous Queues Required: No
00:18:43.168 Arbitration Mechanisms Supported
00:18:43.168 Weighted Round Robin: Not Supported
00:18:43.168 Vendor Specific: Not Supported
00:18:43.168 Reset Timeout: 7500 ms
00:18:43.168 Doorbell Stride: 4 bytes
00:18:43.168 NVM Subsystem Reset: Not Supported
00:18:43.168 Command Sets Supported
00:18:43.168 NVM Command Set: Supported
00:18:43.168 Boot Partition: Not Supported
00:18:43.168 Memory Page Size Minimum: 4096 bytes
00:18:43.168 Memory Page Size Maximum: 4096 bytes
00:18:43.168 Persistent Memory Region: Not Supported
00:18:43.168 Optional Asynchronous Events Supported
00:18:43.168 Namespace Attribute Notices: Supported
00:18:43.168 Firmware Activation Notices: Not Supported
00:18:43.168 ANA Change Notices: Supported
00:18:43.168 PLE Aggregate Log Change Notices: Not Supported
00:18:43.168 LBA Status Info Alert Notices: Not Supported
00:18:43.168 EGE Aggregate Log Change Notices: Not Supported
00:18:43.168 Normal NVM Subsystem Shutdown event: Not Supported
00:18:43.168 Zone Descriptor Change Notices: Not Supported
00:18:43.168 Discovery Log Change Notices: Not Supported
00:18:43.168 Controller Attributes
00:18:43.168 128-bit Host Identifier: Supported
00:18:43.168 Non-Operational Permissive Mode: Not Supported
00:18:43.168 NVM Sets: Not Supported
00:18:43.168 Read Recovery Levels: Not Supported
00:18:43.168 Endurance Groups: Not Supported
00:18:43.168 Predictable Latency Mode: Not Supported
00:18:43.168 Traffic Based Keep ALive: Supported
00:18:43.168 Namespace Granularity: Not Supported
00:18:43.168 SQ Associations: Not Supported
00:18:43.168 UUID List: Not Supported
00:18:43.168 Multi-Domain Subsystem: Not Supported
00:18:43.168 Fixed Capacity Management: Not Supported
00:18:43.169 Variable Capacity Management: Not Supported
00:18:43.169 Delete Endurance Group: Not Supported
00:18:43.169 Delete NVM Set: Not Supported
00:18:43.169 Extended LBA Formats Supported: Not Supported
00:18:43.169 Flexible Data Placement Supported: Not Supported
00:18:43.169
00:18:43.169 Controller Memory Buffer Support
00:18:43.169 ================================
00:18:43.169 Supported: No
00:18:43.169
00:18:43.169 Persistent Memory Region Support
00:18:43.169 ================================
00:18:43.169 Supported: No
00:18:43.169
00:18:43.169 Admin Command Set Attributes
00:18:43.169 ============================
00:18:43.169 Security Send/Receive: Not Supported
00:18:43.169 Format NVM: Not Supported
00:18:43.169 Firmware Activate/Download: Not Supported
00:18:43.169 Namespace Management: Not Supported
00:18:43.169 Device Self-Test: Not Supported
00:18:43.169 Directives: Not Supported
00:18:43.169 NVMe-MI: Not Supported
00:18:43.169 Virtualization Management: Not Supported
00:18:43.169 Doorbell Buffer Config: Not Supported
00:18:43.169 Get LBA Status Capability: Not Supported
00:18:43.169 Command & Feature Lockdown Capability: Not Supported
00:18:43.169 Abort Command Limit: 4
00:18:43.169 Async Event Request Limit: 4
00:18:43.169 Number of Firmware Slots: N/A
00:18:43.169 Firmware Slot 1 Read-Only: N/A
00:18:43.169 Firmware Activation Without Reset: N/A
00:18:43.169 Multiple Update Detection Support: N/A
00:18:43.169 Firmware Update Granularity: No Information Provided
00:18:43.169 Per-Namespace SMART Log: Yes
00:18:43.169 Asymmetric Namespace Access Log Page: Supported
00:18:43.169 ANA Transition Time : 10 sec
00:18:43.169
00:18:43.169 Asymmetric Namespace Access Capabilities
00:18:43.169 ANA Optimized State : Supported
00:18:43.169 ANA Non-Optimized State : Supported
00:18:43.169 ANA Inaccessible State : Supported
00:18:43.169 ANA Persistent Loss State : Supported
00:18:43.169 ANA Change State : Supported
00:18:43.169 ANAGRPID is not changed : No
00:18:43.169 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:18:43.169
00:18:43.169 ANA Group Identifier Maximum : 128
00:18:43.169 Number of ANA Group Identifiers : 128
00:18:43.169 Max Number of Allowed Namespaces : 1024
00:18:43.169 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:18:43.169 Command Effects Log Page: Supported
00:18:43.169 Get Log Page Extended Data: Supported
00:18:43.169 Telemetry Log Pages: Not Supported
00:18:43.169 Persistent Event Log Pages: Not Supported
00:18:43.169 Supported Log Pages Log Page: May Support
00:18:43.169 Commands Supported & Effects Log Page: Not Supported
00:18:43.169 Feature Identifiers & Effects Log Page:May Support
00:18:43.169 NVMe-MI Commands & Effects Log Page: May Support
00:18:43.169 Data Area 4 for Telemetry Log: Not Supported
00:18:43.169 Error Log Page Entries Supported: 128
00:18:43.169 Keep Alive: Supported
00:18:43.169 Keep Alive Granularity: 1000 ms
00:18:43.169
00:18:43.169 NVM Command Set Attributes
00:18:43.169 ==========================
00:18:43.169 Submission Queue Entry Size
00:18:43.169 Max: 64
00:18:43.169 Min: 64
00:18:43.169 Completion Queue Entry Size
00:18:43.169 Max: 16
00:18:43.169 Min: 16
00:18:43.169 Number of Namespaces: 1024
00:18:43.169 Compare Command: Not Supported
00:18:43.169 Write Uncorrectable Command: Not Supported
00:18:43.169 Dataset Management Command: Supported
00:18:43.169 Write Zeroes Command: Supported
00:18:43.169 Set Features Save Field: Not Supported
00:18:43.169 Reservations: Not Supported
00:18:43.169 Timestamp: Not Supported
00:18:43.169 Copy: Not Supported
00:18:43.169 Volatile Write Cache: Present
00:18:43.169 Atomic Write Unit (Normal): 1
00:18:43.169 Atomic Write Unit (PFail): 1
00:18:43.169 Atomic Compare & Write Unit: 1
00:18:43.169 Fused Compare & Write: Not Supported
00:18:43.169 Scatter-Gather List
00:18:43.169 SGL Command Set: Supported
00:18:43.169 SGL Keyed: Not Supported
00:18:43.169 SGL Bit Bucket Descriptor: Not Supported
00:18:43.169 SGL Metadata Pointer: Not Supported
00:18:43.169 Oversized SGL: Not Supported
00:18:43.169 SGL Metadata Address: Not Supported
00:18:43.169 SGL Offset: Supported
00:18:43.169 Transport SGL Data Block: Not Supported
00:18:43.169 Replay Protected Memory Block: Not Supported
00:18:43.169
00:18:43.169 Firmware Slot Information
00:18:43.169 =========================
00:18:43.169 Active slot: 0
00:18:43.169
00:18:43.169 Asymmetric Namespace Access
00:18:43.169 ===========================
00:18:43.169 Change Count : 0
00:18:43.169 Number of ANA Group Descriptors : 1
00:18:43.169 ANA Group Descriptor : 0
00:18:43.169 ANA Group ID : 1
00:18:43.169 Number of NSID Values : 1
00:18:43.169 Change Count : 0
00:18:43.169 ANA State : 1
00:18:43.169 Namespace Identifier : 1
00:18:43.169
00:18:43.169 Commands Supported and Effects
00:18:43.169 ==============================
00:18:43.169 Admin Commands
00:18:43.169 --------------
00:18:43.169 Get Log Page (02h): Supported
00:18:43.169 Identify (06h): Supported
00:18:43.169 Abort (08h): Supported
00:18:43.169 Set Features (09h): Supported
00:18:43.169 Get Features (0Ah): Supported
00:18:43.169 Asynchronous Event Request (0Ch): Supported
00:18:43.169 Keep Alive (18h): Supported
00:18:43.169 I/O Commands
00:18:43.169 ------------
00:18:43.169 Flush (00h): Supported
00:18:43.169 Write (01h): Supported LBA-Change
00:18:43.169 Read (02h): Supported
00:18:43.169 Write Zeroes (08h): Supported LBA-Change
00:18:43.169 Dataset Management (09h): Supported
00:18:43.169
00:18:43.169 Error Log
00:18:43.169 =========
00:18:43.169 Entry: 0
00:18:43.169 Error Count: 0x3
00:18:43.169 Submission Queue Id: 0x0
00:18:43.169 Command Id: 0x5
00:18:43.169 Phase Bit: 0
00:18:43.169 Status Code: 0x2
00:18:43.169 Status Code Type: 0x0
00:18:43.169 Do Not Retry: 1
00:18:43.169 Error Location: 0x28
00:18:43.169 LBA: 0x0
00:18:43.169 Namespace: 0x0
00:18:43.169 Vendor Log Page: 0x0
00:18:43.169 -----------
00:18:43.169 Entry: 1
00:18:43.169 Error Count: 0x2
00:18:43.169 Submission Queue Id: 0x0
00:18:43.169 Command Id: 0x5
00:18:43.169 Phase Bit: 0
00:18:43.169 Status Code: 0x2
00:18:43.169 Status Code Type: 0x0
00:18:43.169 Do Not Retry: 1
00:18:43.169 Error Location: 0x28
00:18:43.169 LBA: 0x0
00:18:43.169 Namespace: 0x0
00:18:43.169 Vendor Log Page: 0x0
00:18:43.169 -----------
00:18:43.169 Entry: 2
00:18:43.169 Error Count: 0x1
00:18:43.169 Submission Queue Id: 0x0
00:18:43.169 Command Id: 0x4
00:18:43.169 Phase Bit: 0
00:18:43.169 Status Code: 0x2
00:18:43.169 Status Code Type: 0x0
00:18:43.169 Do Not Retry: 1
00:18:43.169 Error Location: 0x28
00:18:43.169 LBA: 0x0
00:18:43.169 Namespace: 0x0
00:18:43.169 Vendor Log Page: 0x0
00:18:43.169
00:18:43.169 Number of Queues
00:18:43.169 ================
00:18:43.169 Number of I/O Submission Queues: 128
00:18:43.169 Number of I/O Completion Queues: 128
00:18:43.169
00:18:43.169 ZNS Specific Controller Data
00:18:43.169 ============================
00:18:43.169 Zone Append Size Limit: 0
00:18:43.169
00:18:43.169
00:18:43.169 Active Namespaces
00:18:43.169 =================
00:18:43.169 get_feature(0x05) failed
00:18:43.169 Namespace ID:1
00:18:43.169 Command Set Identifier: NVM (00h)
00:18:43.169 Deallocate: Supported
00:18:43.169 Deallocated/Unwritten Error: Not Supported
00:18:43.169 Deallocated Read Value: Unknown
00:18:43.169 Deallocate in Write Zeroes: Not Supported
00:18:43.169 Deallocated Guard Field: 0xFFFF
00:18:43.169 Flush: Supported
00:18:43.169 Reservation: Not Supported
00:18:43.169 Namespace Sharing Capabilities: Multiple Controllers
00:18:43.169 Size (in LBAs): 1953525168 (931GiB)
00:18:43.169 Capacity (in LBAs): 1953525168 (931GiB)
00:18:43.169 Utilization (in LBAs): 1953525168 (931GiB)
00:18:43.169 UUID: cac5fcc8-310b-46b9-a259-492398fc6560
00:18:43.169 Thin Provisioning: Not Supported
00:18:43.169 Per-NS Atomic Units: Yes
00:18:43.169 Atomic Boundary Size (Normal): 0
00:18:43.169 Atomic Boundary Size (PFail): 0
00:18:43.169 Atomic Boundary Offset: 0
00:18:43.169 NGUID/EUI64 Never Reused: No
00:18:43.169 ANA group ID: 1
00:18:43.169 Namespace Write Protected: No
00:18:43.169 Number of LBA Formats: 1
00:18:43.169 Current LBA Format: LBA Format #00
00:18:43.169 LBA Format #00: Data Size: 512 Metadata Size: 0
00:18:43.169
00:18:43.169 03:32:20 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:18:43.169 03:32:20 -- nvmf/common.sh@477 -- # nvmfcleanup
00:18:43.169 03:32:20 -- nvmf/common.sh@117 -- # sync
00:18:43.169 03:32:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:43.169 03:32:20 -- nvmf/common.sh@120 -- # set +e
00:18:43.169 03:32:20 -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:43.169 03:32:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:43.169 rmmod nvme_tcp
00:18:43.430 rmmod nvme_fabrics
00:18:43.430 03:32:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:43.430 03:32:20 -- nvmf/common.sh@124 -- # set -e
00:18:43.430 03:32:20 -- nvmf/common.sh@125 -- # return 0
00:18:43.430 03:32:20 -- nvmf/common.sh@478 -- # '[' -n '' ']'
00:18:43.430 03:32:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:18:43.430 03:32:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:18:43.430 03:32:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:18:43.430 03:32:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:43.430 03:32:20 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:43.430 03:32:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:43.430 03:32:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:43.430 03:32:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:45.335 03:32:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:45.335 03:32:22 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:18:45.335 03:32:22 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:18:45.335 03:32:22 -- nvmf/common.sh@675 -- # echo 0
00:18:45.335 03:32:22 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:18:45.335 03:32:22 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:18:45.335 03:32:22 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:18:45.335 03:32:22 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:18:45.335 03:32:22 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*)
00:18:45.335 03:32:22 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet
00:18:45.335 03:32:22 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:18:46.748 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:18:46.748 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:18:46.748 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:18:46.748 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:18:46.748 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:18:46.748 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:18:46.748 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:18:46.748 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:18:46.748 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:18:46.748 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:18:46.748 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:18:46.748 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:18:46.748 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:18:46.748 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:18:46.748 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:18:46.748 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:18:47.316 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:18:47.574
00:18:47.574 real 0m8.912s
00:18:47.574 user 0m1.854s
00:18:47.574 sys 0m3.105s
00:18:47.574 03:32:25 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:18:47.574 03:32:25 -- common/autotest_common.sh@10 -- # set +x
00:18:47.574 ************************************
00:18:47.574 END TEST nvmf_identify_kernel_target
00:18:47.574 ************************************
00:18:47.574 03:32:25 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:18:47.574 03:32:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:18:47.574 03:32:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:47.574 03:32:25 -- common/autotest_common.sh@10 -- # set +x
00:18:47.574 ************************************
00:18:47.574 START TEST nvmf_auth
00:18:47.574 ************************************
00:18:47.574 03:32:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:18:47.832 * Looking for test storage...
00:18:47.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:18:47.832 03:32:25 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:18:47.832 03:32:25 -- nvmf/common.sh@7 -- # uname -s
00:18:47.832 03:32:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:47.832 03:32:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:47.832 03:32:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:47.832 03:32:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:47.832 03:32:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:47.832 03:32:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:47.832 03:32:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:47.832 03:32:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:47.832 03:32:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:47.832 03:32:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:47.832 03:32:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:47.833 03:32:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:18:47.833 03:32:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:47.833 03:32:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:47.833 03:32:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:47.833 03:32:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:47.833 03:32:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:18:47.833 03:32:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:47.833 03:32:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:47.833 03:32:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:47.833 03:32:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:47.833 03:32:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:47.833 03:32:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:47.833 03:32:25 -- paths/export.sh@5 -- # export PATH
00:18:47.833 03:32:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:47.833 03:32:25 -- nvmf/common.sh@47 -- # : 0
00:18:47.833 03:32:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:18:47.833 03:32:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:18:47.833 03:32:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:47.833 03:32:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:47.833 03:32:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:47.833 03:32:25 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:18:47.833 03:32:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:18:47.833 03:32:25 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:18:47.833 03:32:25 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:18:47.833 03:32:25 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:18:47.833 03:32:25 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
00:18:47.833 03:32:25 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
00:18:47.833 03:32:25 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:18:47.833 03:32:25 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:18:47.833 03:32:25 -- host/auth.sh@21 -- # keys=()
00:18:47.833 03:32:25 -- host/auth.sh@77 -- # nvmftestinit
00:18:47.833 03:32:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:18:47.833 03:32:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:47.833 03:32:25 -- nvmf/common.sh@437 -- # prepare_net_devs
00:18:47.833 03:32:25 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:18:47.833 03:32:25 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:18:47.833 03:32:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:47.833 03:32:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:47.833 03:32:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:47.833 03:32:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:18:47.833 03:32:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:18:47.833 03:32:25 -- nvmf/common.sh@285 -- # xtrace_disable
00:18:47.833 03:32:25 -- common/autotest_common.sh@10 -- # set +x
00:18:49.741 03:32:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:18:49.741 03:32:27 -- nvmf/common.sh@291 -- # pci_devs=()
00:18:49.741 03:32:27 -- nvmf/common.sh@291 -- # local -a pci_devs
00:18:49.741 03:32:27 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:18:49.741 03:32:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:18:49.741 03:32:27 -- nvmf/common.sh@293 -- # pci_drivers=()
00:18:49.741 03:32:27 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:18:49.741 03:32:27 -- nvmf/common.sh@295 -- # net_devs=()
00:18:49.741 03:32:27 -- nvmf/common.sh@295 -- # local -ga net_devs
00:18:49.741 03:32:27 -- nvmf/common.sh@296 -- # e810=()
00:18:49.741 03:32:27 -- nvmf/common.sh@296 -- # local -ga e810
00:18:49.741 03:32:27 -- nvmf/common.sh@297 -- # x722=()
00:18:49.741 03:32:27 -- nvmf/common.sh@297 -- # local -ga x722
00:18:49.741 03:32:27 -- nvmf/common.sh@298 -- # mlx=()
00:18:49.741 03:32:27 -- nvmf/common.sh@298 -- # local -ga mlx
00:18:49.741 03:32:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:49.741 03:32:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:49.741 03:32:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:49.741 03:32:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:49.741 03:32:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:49.741 03:32:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:49.741 03:32:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:49.741 03:32:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:49.741 03:32:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:49.741 03:32:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:49.741 03:32:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:49.741 03:32:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:18:49.741 03:32:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:18:49.741 03:32:27 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:18:49.741 03:32:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:49.741 03:32:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:18:49.741 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:18:49.741 03:32:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:49.741 03:32:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:18:49.741 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:18:49.741 03:32:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:18:49.741 03:32:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:18:49.741 03:32:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:49.741 03:32:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:49.741 03:32:27 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:18:49.741 03:32:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:49.741 03:32:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:18:49.741 Found net devices under 0000:0a:00.0: cvl_0_0
00:18:49.741 03:32:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:18:49.741 03:32:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:49.741 03:32:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:49.742 03:32:27 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:18:49.742 03:32:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:49.742 03:32:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:18:49.742 Found net devices under 0000:0a:00.1: cvl_0_1
00:18:49.742 03:32:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:18:49.742 03:32:27 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:18:49.742 03:32:27 -- nvmf/common.sh@403 -- # is_hw=yes
00:18:49.742 03:32:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:18:49.742 03:32:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:18:49.742 03:32:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:18:49.742 03:32:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:49.742 03:32:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:49.742 03:32:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:49.742 03:32:27 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:18:49.742 03:32:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:49.742 03:32:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:49.742 03:32:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:18:49.742 03:32:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:49.742 03:32:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:49.742 03:32:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:18:49.742 03:32:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:18:49.742 03:32:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:18:49.742 03:32:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:50.001 03:32:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:50.001 03:32:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:50.001 03:32:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:50.001 03:32:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:50.001 03:32:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:50.001 03:32:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:50.001 03:32:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:50.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:50.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms
00:18:50.001
00:18:50.001 --- 10.0.0.2 ping statistics ---
00:18:50.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:50.001 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms
00:18:50.001 03:32:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:50.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:50.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms
00:18:50.001
00:18:50.001 --- 10.0.0.1 ping statistics ---
00:18:50.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:50.001 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms
00:18:50.001 03:32:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:50.001 03:32:27 -- nvmf/common.sh@411 -- # return 0
00:18:50.001 03:32:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:18:50.001 03:32:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:50.001 03:32:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:18:50.001 03:32:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:18:50.001 03:32:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:50.001 03:32:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:18:50.001 03:32:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:18:50.001 03:32:27 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth
00:18:50.001 03:32:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:18:50.001 03:32:27 -- common/autotest_common.sh@710 -- # xtrace_disable
00:18:50.001 03:32:27 -- common/autotest_common.sh@10 -- # set +x
00:18:50.001 03:32:27 -- nvmf/common.sh@470 -- # nvmfpid=309178
00:18:50.001 03:32:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:18:50.001 03:32:27 -- nvmf/common.sh@471 -- # waitforlisten 309178
00:18:50.001 03:32:27 -- common/autotest_common.sh@817 -- # '[' -z 309178 ']'
00:18:50.001 03:32:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:50.001 03:32:27 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:50.001 03:32:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
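The nvmf_tcp_init trace above wires the two ports of one physical NIC into a point-to-point rig: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the SPDK target is then launched inside the namespace. A minimal bash sketch of the same wiring (interface names taken from this log; the readiness loop is a simplified stand-in for waitforlisten):

# Two-port loopback topology for NVMe/TCP tests.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.2; done   # wait for the RPC socket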
00:18:50.001 03:32:27 -- common/autotest_common.sh@826 -- # xtrace_disable
00:18:50.001 03:32:27 -- common/autotest_common.sh@10 -- # set +x
00:18:50.259 03:32:27 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:18:50.259 03:32:27 -- common/autotest_common.sh@850 -- # return 0
00:18:50.260 03:32:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:18:50.260 03:32:27 -- common/autotest_common.sh@716 -- # xtrace_disable
00:18:50.260 03:32:27 -- common/autotest_common.sh@10 -- # set +x
00:18:50.260 03:32:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:50.260 03:32:27 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
00:18:50.260 03:32:27 -- host/auth.sh@81 -- # gen_key null 32
00:18:50.260 03:32:27 -- host/auth.sh@53 -- # local digest len file key
00:18:50.260 03:32:27 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:18:50.260 03:32:27 -- host/auth.sh@54 -- # local -A digests
00:18:50.260 03:32:27 -- host/auth.sh@56 -- # digest=null
00:18:50.260 03:32:27 -- host/auth.sh@56 -- # len=32
00:18:50.260 03:32:27 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom
00:18:50.260 03:32:27 -- host/auth.sh@57 -- # key=361c099eaea0e01f5c311c6b7128a822
00:18:50.260 03:32:27 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX
00:18:50.260 03:32:27 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.DQx
00:18:50.260 03:32:27 -- host/auth.sh@59 -- # format_dhchap_key 361c099eaea0e01f5c311c6b7128a822 0
00:18:50.260 03:32:27 -- nvmf/common.sh@708 -- # format_key DHHC-1 361c099eaea0e01f5c311c6b7128a822 0
00:18:50.260 03:32:27 -- nvmf/common.sh@691 -- # local prefix key digest
00:18:50.260 03:32:27 -- nvmf/common.sh@693 -- # prefix=DHHC-1
00:18:50.260 03:32:27 -- nvmf/common.sh@693 -- # key=361c099eaea0e01f5c311c6b7128a822
00:18:50.260 03:32:27 -- nvmf/common.sh@693 -- # digest=0
00:18:50.260 03:32:27 -- nvmf/common.sh@694 -- # python -
00:18:50.519 03:32:27 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.DQx
00:18:50.519 03:32:27 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.DQx
00:18:50.519 03:32:27 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.DQx
00:18:50.519 03:32:27 -- host/auth.sh@82 -- # gen_key null 48
00:18:50.519 03:32:27 -- host/auth.sh@53 -- # local digest len file key
00:18:50.519 03:32:27 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:18:50.519 03:32:27 -- host/auth.sh@54 -- # local -A digests
00:18:50.519 03:32:27 -- host/auth.sh@56 -- # digest=null
00:18:50.519 03:32:27 -- host/auth.sh@56 -- # len=48
00:18:50.519 03:32:27 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom
00:18:50.519 03:32:27 -- host/auth.sh@57 -- # key=bcfb4c0f242b446c83601812cd55a63a13bfffc35ddaaa83
00:18:50.519 03:32:27 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX
00:18:50.519 03:32:27 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.ps2
00:18:50.519 03:32:27 -- host/auth.sh@59 -- # format_dhchap_key bcfb4c0f242b446c83601812cd55a63a13bfffc35ddaaa83 0
00:18:50.519 03:32:27 -- nvmf/common.sh@708 -- # format_key DHHC-1 bcfb4c0f242b446c83601812cd55a63a13bfffc35ddaaa83 0
00:18:50.519 03:32:27 -- nvmf/common.sh@691 -- # local prefix key digest
00:18:50.519 03:32:27 -- nvmf/common.sh@693 -- # prefix=DHHC-1
00:18:50.519 03:32:27 -- nvmf/common.sh@693 -- # key=bcfb4c0f242b446c83601812cd55a63a13bfffc35ddaaa83
00:18:50.519 03:32:27 -- nvmf/common.sh@693 -- # digest=0
00:18:50.519 03:32:27 -- nvmf/common.sh@694 -- # python -
00:18:50.519 03:32:27 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.ps2
00:18:50.519 03:32:27 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.ps2
00:18:50.519 03:32:27 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.ps2
00:18:50.519 03:32:27 -- host/auth.sh@83 -- # gen_key sha256 32
00:18:50.519 03:32:27 -- host/auth.sh@53 -- # local digest len file key
00:18:50.519 03:32:27 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:18:50.519 03:32:27 -- host/auth.sh@54 -- # local -A digests
00:18:50.519 03:32:27 -- host/auth.sh@56 -- # digest=sha256
00:18:50.519 03:32:27 -- host/auth.sh@56 -- # len=32
00:18:50.519 03:32:27 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom
00:18:50.519 03:32:27 -- host/auth.sh@57 -- # key=b91e0786722d3612dc59ffe3891cabeb
00:18:50.519 03:32:27 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX
00:18:50.519 03:32:27 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.Sz7
00:18:50.519 03:32:27 -- host/auth.sh@59 -- # format_dhchap_key b91e0786722d3612dc59ffe3891cabeb 1
00:18:50.519 03:32:27 -- nvmf/common.sh@708 -- # format_key DHHC-1 b91e0786722d3612dc59ffe3891cabeb 1
00:18:50.519 03:32:27 -- nvmf/common.sh@691 -- # local prefix key digest
00:18:50.519 03:32:27 -- nvmf/common.sh@693 -- # prefix=DHHC-1
00:18:50.519 03:32:27 -- nvmf/common.sh@693 -- # key=b91e0786722d3612dc59ffe3891cabeb
00:18:50.519 03:32:27 -- nvmf/common.sh@693 -- # digest=1
00:18:50.519 03:32:27 -- nvmf/common.sh@694 -- # python -
00:18:50.519 03:32:27 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.Sz7
00:18:50.519 03:32:27 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.Sz7
00:18:50.519 03:32:27 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.Sz7
00:18:50.519 03:32:27 -- host/auth.sh@84 -- # gen_key sha384 48
00:18:50.519 03:32:27 -- host/auth.sh@53 -- # local digest len file key
00:18:50.519 03:32:27 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:18:50.519 03:32:27 -- host/auth.sh@54 -- # local -A digests
00:18:50.519 03:32:27 -- host/auth.sh@56 -- # digest=sha384
00:18:50.519 03:32:27 -- host/auth.sh@56 -- # len=48
00:18:50.519 03:32:27 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom
00:18:50.519 03:32:27 -- host/auth.sh@57 -- # key=4918df7dbdc97274449780c2074160df09640a55f81f558e
00:18:50.519 03:32:27 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX
00:18:50.519 03:32:27 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.Mlm
00:18:50.519 03:32:27 -- host/auth.sh@59 -- # format_dhchap_key 4918df7dbdc97274449780c2074160df09640a55f81f558e 2
00:18:50.519 03:32:27 -- nvmf/common.sh@708 -- # format_key DHHC-1 4918df7dbdc97274449780c2074160df09640a55f81f558e 2
00:18:50.519 03:32:27 -- nvmf/common.sh@691 -- # local prefix key digest
00:18:50.519 03:32:27 -- nvmf/common.sh@693 -- # prefix=DHHC-1
00:18:50.519 03:32:27 -- nvmf/common.sh@693 -- # key=4918df7dbdc97274449780c2074160df09640a55f81f558e
00:18:50.519 03:32:27 -- nvmf/common.sh@693 -- # digest=2
00:18:50.519 03:32:27 -- nvmf/common.sh@694 -- # python -
00:18:50.519 03:32:28 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.Mlm
00:18:50.519 03:32:28 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.Mlm
00:18:50.519 03:32:28 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.Mlm
00:18:50.519 03:32:28 -- host/auth.sh@85 -- # gen_key sha512 64
00:18:50.519 03:32:28 -- host/auth.sh@53 -- # local digest len file key
00:18:50.519 03:32:28 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:18:50.519 03:32:28 -- host/auth.sh@54 -- # local -A digests
00:18:50.519 03:32:28 -- host/auth.sh@56 -- # digest=sha512
00:18:50.519 03:32:28 -- host/auth.sh@56 -- # len=64
00:18:50.519 03:32:28 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom
00:18:50.519 03:32:28 -- host/auth.sh@57 -- # key=ad9f3aff7e8e524627041934ccfc579e0af5e2308f0cff2edd61b0cfd992e372
00:18:50.519 03:32:28 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX
00:18:50.519 03:32:28 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.4fl
00:18:50.519 03:32:28 -- host/auth.sh@59 -- # format_dhchap_key ad9f3aff7e8e524627041934ccfc579e0af5e2308f0cff2edd61b0cfd992e372 3
00:18:50.519 03:32:28 -- nvmf/common.sh@708 -- # format_key DHHC-1 ad9f3aff7e8e524627041934ccfc579e0af5e2308f0cff2edd61b0cfd992e372 3
00:18:50.519 03:32:28 -- nvmf/common.sh@691 -- # local prefix key digest
00:18:50.519 03:32:28 -- nvmf/common.sh@693 -- # prefix=DHHC-1
00:18:50.519 03:32:28 -- nvmf/common.sh@693 -- # key=ad9f3aff7e8e524627041934ccfc579e0af5e2308f0cff2edd61b0cfd992e372
00:18:50.519 03:32:28 -- nvmf/common.sh@693 -- # digest=3
00:18:50.519 03:32:28 -- nvmf/common.sh@694 -- # python -
00:18:50.519 03:32:28 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.4fl
00:18:50.519 03:32:28 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.4fl
00:18:50.519 03:32:28 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.4fl
00:18:50.519 03:32:28 -- host/auth.sh@87 -- # waitforlisten 309178
00:18:50.519 03:32:28 -- common/autotest_common.sh@817 -- # '[' -z 309178 ']'
00:18:50.519 03:32:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:50.519 03:32:28 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:50.519 03:32:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
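gen_key above reads len hex characters from /dev/urandom and format_dhchap_key wraps them into the DH-HMAC-CHAP secret representation DHHC-1:<digest id>:<base64 blob>:, the blob being the secret bytes followed by their CRC-32, which is why the base64 strings in this log are a few characters longer than the raw hex keys. A condensed sketch of the pair; the CRC byte order follows the NVMe TP 8006 secret representation and the inline python mirrors the `python -` call in the trace, but treat the details as an approximation rather than SPDK's exact code:

gen_dhchap_key() {
    local digest=$1 len=$2 key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    # DHHC-1 secret: base64(secret bytes || CRC-32 of secret, least significant byte first)
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
blob = key + zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(blob).decode()}:")
PY
}

gen_dhchap_key 0 32   # -> DHHC-1:00:...: (a null-digest key like keys[0] above)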
00:18:50.519 03:32:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:50.519 03:32:28 -- common/autotest_common.sh@10 -- # set +x 00:18:50.778 03:32:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:50.778 03:32:28 -- common/autotest_common.sh@850 -- # return 0 00:18:50.779 03:32:28 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:50.779 03:32:28 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DQx 00:18:50.779 03:32:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.779 03:32:28 -- common/autotest_common.sh@10 -- # set +x 00:18:50.779 03:32:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.779 03:32:28 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:50.779 03:32:28 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ps2 00:18:50.779 03:32:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.779 03:32:28 -- common/autotest_common.sh@10 -- # set +x 00:18:50.779 03:32:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.779 03:32:28 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:50.779 03:32:28 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Sz7 00:18:50.779 03:32:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.779 03:32:28 -- common/autotest_common.sh@10 -- # set +x 00:18:50.779 03:32:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.779 03:32:28 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:50.779 03:32:28 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Mlm 00:18:50.779 03:32:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.779 03:32:28 -- common/autotest_common.sh@10 -- # set +x 00:18:51.037 03:32:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.037 03:32:28 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:51.037 03:32:28 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.4fl 00:18:51.037 03:32:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.037 03:32:28 -- common/autotest_common.sh@10 -- # set +x 00:18:51.037 03:32:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.037 03:32:28 -- host/auth.sh@92 -- # nvmet_auth_init 00:18:51.037 03:32:28 -- host/auth.sh@35 -- # get_main_ns_ip 00:18:51.037 03:32:28 -- nvmf/common.sh@717 -- # local ip 00:18:51.037 03:32:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:51.037 03:32:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:51.037 03:32:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.037 03:32:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.037 03:32:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:51.037 03:32:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.037 03:32:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:51.037 03:32:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:51.037 03:32:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:51.037 03:32:28 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:51.037 03:32:28 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:51.037 03:32:28 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:18:51.037 03:32:28 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:51.037 03:32:28 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:51.037 03:32:28 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:51.037 03:32:28 -- nvmf/common.sh@628 -- # local block nvme 00:18:51.037 03:32:28 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:18:51.037 03:32:28 -- nvmf/common.sh@631 -- # modprobe nvmet 00:18:51.037 03:32:28 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:51.037 03:32:28 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:18:51.974 Waiting for block devices as requested 00:18:51.974 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:18:51.975 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:52.234 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:52.234 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:52.234 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:52.493 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:52.493 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:52.493 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:52.493 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:52.751 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:52.751 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:52.751 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:52.751 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:52.751 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:53.009 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:53.009 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:53.009 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:53.577 03:32:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:18:53.577 03:32:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:53.577 03:32:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:18:53.577 03:32:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:53.577 03:32:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:53.577 03:32:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:53.577 03:32:30 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:18:53.577 03:32:30 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:53.577 03:32:30 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:18:53.577 No valid GPT data, bailing 00:18:53.577 03:32:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:53.577 03:32:30 -- scripts/common.sh@391 -- # pt= 00:18:53.577 03:32:30 -- scripts/common.sh@392 -- # return 1 00:18:53.577 03:32:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:18:53.577 03:32:30 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:18:53.577 03:32:30 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:53.577 03:32:30 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:53.577 03:32:30 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:53.577 03:32:30 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:53.577 03:32:30 -- nvmf/common.sh@656 -- # echo 1 00:18:53.577 03:32:30 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:18:53.577 03:32:30 -- nvmf/common.sh@658 -- # echo 1 00:18:53.577 03:32:30 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:18:53.577 03:32:30 -- nvmf/common.sh@661 -- # echo tcp 00:18:53.577 03:32:30 -- 
nvmf/common.sh@662 -- # echo 4420 00:18:53.577 03:32:30 -- nvmf/common.sh@663 -- # echo ipv4 00:18:53.577 03:32:30 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:53.577 03:32:30 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:18:53.577 00:18:53.577 Discovery Log Number of Records 2, Generation counter 2 00:18:53.577 =====Discovery Log Entry 0====== 00:18:53.577 trtype: tcp 00:18:53.577 adrfam: ipv4 00:18:53.577 subtype: current discovery subsystem 00:18:53.577 treq: not specified, sq flow control disable supported 00:18:53.577 portid: 1 00:18:53.577 trsvcid: 4420 00:18:53.577 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:53.577 traddr: 10.0.0.1 00:18:53.577 eflags: none 00:18:53.577 sectype: none 00:18:53.577 =====Discovery Log Entry 1====== 00:18:53.577 trtype: tcp 00:18:53.577 adrfam: ipv4 00:18:53.577 subtype: nvme subsystem 00:18:53.577 treq: not specified, sq flow control disable supported 00:18:53.577 portid: 1 00:18:53.577 trsvcid: 4420 00:18:53.577 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:53.577 traddr: 10.0.0.1 00:18:53.577 eflags: none 00:18:53.577 sectype: none 00:18:53.577 03:32:30 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:53.577 03:32:30 -- host/auth.sh@37 -- # echo 0 00:18:53.577 03:32:30 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:53.577 03:32:30 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:53.577 03:32:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:53.578 03:32:30 -- host/auth.sh@44 -- # digest=sha256 00:18:53.578 03:32:30 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:53.578 03:32:30 -- host/auth.sh@44 -- # keyid=1 00:18:53.578 03:32:30 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:18:53.578 03:32:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:53.578 03:32:30 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:53.578 03:32:30 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:18:53.578 03:32:30 -- host/auth.sh@100 -- # IFS=, 00:18:53.578 03:32:30 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:18:53.578 03:32:30 -- host/auth.sh@100 -- # IFS=, 00:18:53.578 03:32:30 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:53.578 03:32:30 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:53.578 03:32:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:53.578 03:32:30 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:18:53.578 03:32:30 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:53.578 03:32:30 -- host/auth.sh@68 -- # keyid=1 00:18:53.578 03:32:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:53.578 03:32:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.578 03:32:30 -- common/autotest_common.sh@10 -- # set +x 00:18:53.578 03:32:30 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.578 03:32:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:53.578 03:32:30 -- nvmf/common.sh@717 -- # local ip 00:18:53.578 03:32:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:53.578 03:32:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:53.578 03:32:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.578 03:32:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.578 03:32:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:53.578 03:32:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.578 03:32:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:53.578 03:32:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:53.578 03:32:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:53.578 03:32:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:53.578 03:32:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.578 03:32:30 -- common/autotest_common.sh@10 -- # set +x 00:18:53.578 nvme0n1 00:18:53.578 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.578 03:32:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.578 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.578 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:53.578 03:32:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:53.578 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.838 03:32:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.838 03:32:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.838 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.838 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:53.838 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.838 03:32:31 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:18:53.838 03:32:31 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.838 03:32:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:53.838 03:32:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:53.838 03:32:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:53.838 03:32:31 -- host/auth.sh@44 -- # digest=sha256 00:18:53.838 03:32:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:53.838 03:32:31 -- host/auth.sh@44 -- # keyid=0 00:18:53.838 03:32:31 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:18:53.838 03:32:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:53.838 03:32:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:53.838 03:32:31 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:18:53.838 03:32:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:18:53.838 03:32:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:53.838 03:32:31 -- host/auth.sh@68 -- # digest=sha256 00:18:53.838 03:32:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:53.838 03:32:31 -- host/auth.sh@68 -- # keyid=0 00:18:53.838 03:32:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.838 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.838 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:53.838 03:32:31 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.838 03:32:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:53.838 03:32:31 -- nvmf/common.sh@717 -- # local ip 00:18:53.838 03:32:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:53.838 03:32:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:53.838 03:32:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.838 03:32:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.838 03:32:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:53.838 03:32:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.838 03:32:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:53.838 03:32:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:53.838 03:32:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:53.838 03:32:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:53.838 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.838 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:53.838 nvme0n1 00:18:53.838 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.838 03:32:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.838 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.838 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:53.838 03:32:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:53.838 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.838 03:32:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.838 03:32:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.838 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.838 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:53.838 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.838 03:32:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:53.838 03:32:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:53.838 03:32:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:53.838 03:32:31 -- host/auth.sh@44 -- # digest=sha256 00:18:53.838 03:32:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:53.838 03:32:31 -- host/auth.sh@44 -- # keyid=1 00:18:53.838 03:32:31 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:18:53.838 03:32:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:53.838 03:32:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:53.838 03:32:31 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:18:53.838 03:32:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:18:53.838 03:32:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:53.838 03:32:31 -- host/auth.sh@68 -- # digest=sha256 00:18:53.838 03:32:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:53.838 03:32:31 -- host/auth.sh@68 -- # keyid=1 00:18:53.838 03:32:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.838 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.838 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:53.838 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.838 03:32:31 -- host/auth.sh@70 -- # get_main_ns_ip 
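
For reference, the kernel soft-target that nvmf/common.sh traced above (the configfs bring-up at nvmf/common.sh@626-@666 plus the allowed-hosts setup at host/auth.sh@36-38) reduces to a short shell sequence. The sketch below is reconstructed from the xtrace: the redirect targets (attr_model, attr_allow_any_host, device_path, enable, addr_*) are not visible in the trace and are assumed to be the standard kernel nvmet attribute files; the NQNs, backing device, and addresses are the ones in the log.

    #!/usr/bin/env bash
    # Sketch: bring up a kernel NVMe-oF/TCP target backed by /dev/nvme0n1,
    # then restrict it to a single host, as in the trace above.
    set -e
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    modprobe nvmet    # traced at nvmf/common.sh@631; nvmet-tcp is assumed to be
                      # loaded on demand when the tcp port is bound
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"

    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # model string (assumed target of @654)
    echo 1 > "$subsys/attr_allow_any_host"                        # open to all hosts for now (assumed target of @656)
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"        # back namespace 1 with the local disk
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                           # expose the subsystem on the port

    # host/auth.sh@36-38: lock the subsystem down to one authenticated host.
    mkdir "$host"
    echo 0 > "$subsys/attr_allow_any_host"
    ln -s "$host" "$subsys/allowed_hosts/"

After this, the `nvme discover` output above shows the expected two records: the discovery subsystem itself and nqn.2024-02.io.spdk:cnode0, both reachable at 10.0.0.1:4420 over tcp.
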
00:18:53.838 03:32:31 -- nvmf/common.sh@717 -- # local ip 00:18:53.838 03:32:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:53.838 03:32:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:53.838 03:32:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.838 03:32:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.838 03:32:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:53.838 03:32:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.838 03:32:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:53.838 03:32:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:53.838 03:32:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:53.838 03:32:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:53.838 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.838 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:54.098 nvme0n1 00:18:54.098 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.098 03:32:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.098 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.098 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:54.098 03:32:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:54.098 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.098 03:32:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.098 03:32:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.098 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.098 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:54.098 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.098 03:32:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:54.098 03:32:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:54.098 03:32:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:54.098 03:32:31 -- host/auth.sh@44 -- # digest=sha256 00:18:54.098 03:32:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:54.098 03:32:31 -- host/auth.sh@44 -- # keyid=2 00:18:54.098 03:32:31 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:18:54.098 03:32:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:54.098 03:32:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:54.098 03:32:31 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:18:54.098 03:32:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:18:54.098 03:32:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.098 03:32:31 -- host/auth.sh@68 -- # digest=sha256 00:18:54.098 03:32:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:54.098 03:32:31 -- host/auth.sh@68 -- # keyid=2 00:18:54.098 03:32:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.098 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.098 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:54.098 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.098 03:32:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.098 03:32:31 -- nvmf/common.sh@717 -- # local ip 00:18:54.098 03:32:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.098 03:32:31 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:18:54.098 03:32:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.098 03:32:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.098 03:32:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:54.098 03:32:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.098 03:32:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:54.098 03:32:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:54.098 03:32:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:54.098 03:32:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:54.098 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.098 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:54.358 nvme0n1 00:18:54.358 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.358 03:32:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.358 03:32:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:54.358 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.358 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:54.358 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.358 03:32:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.358 03:32:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.358 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.358 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:54.358 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.358 03:32:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:54.358 03:32:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:54.358 03:32:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:54.358 03:32:31 -- host/auth.sh@44 -- # digest=sha256 00:18:54.358 03:32:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:54.358 03:32:31 -- host/auth.sh@44 -- # keyid=3 00:18:54.358 03:32:31 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:18:54.358 03:32:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:54.358 03:32:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:54.358 03:32:31 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:18:54.358 03:32:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:18:54.358 03:32:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.358 03:32:31 -- host/auth.sh@68 -- # digest=sha256 00:18:54.358 03:32:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:54.358 03:32:31 -- host/auth.sh@68 -- # keyid=3 00:18:54.358 03:32:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.358 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.358 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:54.358 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.358 03:32:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.358 03:32:31 -- nvmf/common.sh@717 -- # local ip 00:18:54.358 03:32:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.358 03:32:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:54.358 03:32:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
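
The nvmet_auth_set_key helper that runs before every connect in this log is small: the xtrace at host/auth.sh@42-49 shows it selecting a digest, DH group, and keyid, then echoing three values into the target's configfs host entry. A minimal sketch under that assumption follows; the three destination files (dhchap_hash, dhchap_dhgroup, dhchap_key) are the kernel nvmet per-host auth attributes and do not appear in the trace, and the keys array holding the DHHC-1 secrets is assumed to be populated earlier in auth.sh, before this excerpt.

    # Sketch of the helper traced at host/auth.sh@42-49.
    hostnqn=nqn.2024-02.io.spdk:host0
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]}    # e.g. DHHC-1:00:YmNmYjRjMGYy...UC0hew==:
        local host=/sys/kernel/config/nvmet/hosts/$hostnqn

        echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha256)
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe2048
        echo "$key"          > "$host/dhchap_key"      # DHHC-1 formatted secret
    }

In the DHHC-1 strings themselves, the leading two-digit field (00 through 03 in this run) identifies the hash used to transform the configured secret, with 00 meaning the secret is stored untransformed.
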
00:18:54.358 03:32:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.358 03:32:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:54.358 03:32:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.358 03:32:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:54.358 03:32:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:54.358 03:32:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:54.358 03:32:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:54.358 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.358 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:54.358 nvme0n1 00:18:54.358 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.358 03:32:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.358 03:32:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:54.358 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.358 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:54.358 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.358 03:32:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.358 03:32:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.358 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.358 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:54.358 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.358 03:32:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:54.358 03:32:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:54.358 03:32:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:54.358 03:32:31 -- host/auth.sh@44 -- # digest=sha256 00:18:54.358 03:32:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:54.358 03:32:31 -- host/auth.sh@44 -- # keyid=4 00:18:54.358 03:32:31 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:18:54.358 03:32:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:54.358 03:32:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:54.358 03:32:31 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:18:54.358 03:32:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:18:54.358 03:32:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.358 03:32:31 -- host/auth.sh@68 -- # digest=sha256 00:18:54.358 03:32:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:54.358 03:32:31 -- host/auth.sh@68 -- # keyid=4 00:18:54.358 03:32:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.358 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.358 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:54.358 03:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.619 03:32:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.619 03:32:31 -- nvmf/common.sh@717 -- # local ip 00:18:54.619 03:32:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.619 03:32:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:54.619 03:32:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.619 03:32:31 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.619 03:32:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:54.619 03:32:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.619 03:32:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:54.619 03:32:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:54.619 03:32:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:54.619 03:32:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:54.619 03:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.619 03:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:54.619 nvme0n1 00:18:54.619 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.619 03:32:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.619 03:32:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:54.619 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.619 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:54.619 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.619 03:32:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.619 03:32:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.619 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.619 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:54.619 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.619 03:32:32 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.619 03:32:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:54.619 03:32:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:54.619 03:32:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:54.619 03:32:32 -- host/auth.sh@44 -- # digest=sha256 00:18:54.619 03:32:32 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:54.619 03:32:32 -- host/auth.sh@44 -- # keyid=0 00:18:54.619 03:32:32 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:18:54.619 03:32:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:54.619 03:32:32 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:54.619 03:32:32 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:18:54.619 03:32:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:18:54.619 03:32:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.619 03:32:32 -- host/auth.sh@68 -- # digest=sha256 00:18:54.619 03:32:32 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:54.619 03:32:32 -- host/auth.sh@68 -- # keyid=0 00:18:54.619 03:32:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:54.619 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.619 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:54.619 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.619 03:32:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.619 03:32:32 -- nvmf/common.sh@717 -- # local ip 00:18:54.619 03:32:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.619 03:32:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:54.619 03:32:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.619 03:32:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.619 03:32:32 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:18:54.619 03:32:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.619 03:32:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:54.619 03:32:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:54.619 03:32:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:54.619 03:32:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:54.619 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.619 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:54.879 nvme0n1 00:18:54.879 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.879 03:32:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.879 03:32:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:54.879 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.879 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:54.879 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.879 03:32:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.879 03:32:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.879 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.879 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:54.879 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.879 03:32:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:54.879 03:32:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:54.879 03:32:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:54.879 03:32:32 -- host/auth.sh@44 -- # digest=sha256 00:18:54.879 03:32:32 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:54.879 03:32:32 -- host/auth.sh@44 -- # keyid=1 00:18:54.879 03:32:32 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:18:54.879 03:32:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:54.879 03:32:32 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:54.879 03:32:32 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:18:54.879 03:32:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:18:54.879 03:32:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.879 03:32:32 -- host/auth.sh@68 -- # digest=sha256 00:18:54.879 03:32:32 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:54.879 03:32:32 -- host/auth.sh@68 -- # keyid=1 00:18:54.879 03:32:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:54.879 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.879 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:54.879 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.879 03:32:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.879 03:32:32 -- nvmf/common.sh@717 -- # local ip 00:18:54.879 03:32:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.879 03:32:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:54.879 03:32:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.879 03:32:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.879 03:32:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:54.879 03:32:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.879 03:32:32 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:54.879 03:32:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:54.879 03:32:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:54.879 03:32:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:54.879 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.879 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:55.138 nvme0n1 00:18:55.138 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.138 03:32:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.138 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.138 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:55.138 03:32:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:55.138 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.138 03:32:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.138 03:32:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.138 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.138 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:55.138 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.138 03:32:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:55.138 03:32:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:55.138 03:32:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:55.138 03:32:32 -- host/auth.sh@44 -- # digest=sha256 00:18:55.138 03:32:32 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:55.138 03:32:32 -- host/auth.sh@44 -- # keyid=2 00:18:55.138 03:32:32 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:18:55.138 03:32:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:55.138 03:32:32 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:55.138 03:32:32 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:18:55.138 03:32:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:18:55.138 03:32:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:55.138 03:32:32 -- host/auth.sh@68 -- # digest=sha256 00:18:55.138 03:32:32 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:55.138 03:32:32 -- host/auth.sh@68 -- # keyid=2 00:18:55.138 03:32:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.138 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.138 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:55.138 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.138 03:32:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:55.138 03:32:32 -- nvmf/common.sh@717 -- # local ip 00:18:55.138 03:32:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:55.138 03:32:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:55.138 03:32:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.138 03:32:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.138 03:32:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:55.138 03:32:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.138 03:32:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:55.138 03:32:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:55.138 03:32:32 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:18:55.139 03:32:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:55.139 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.139 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:55.139 nvme0n1 00:18:55.139 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.139 03:32:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.139 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.139 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:55.139 03:32:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:55.402 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.402 03:32:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.402 03:32:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.402 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.402 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:55.402 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.402 03:32:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:55.402 03:32:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:55.402 03:32:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:55.402 03:32:32 -- host/auth.sh@44 -- # digest=sha256 00:18:55.402 03:32:32 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:55.402 03:32:32 -- host/auth.sh@44 -- # keyid=3 00:18:55.402 03:32:32 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:18:55.402 03:32:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:55.402 03:32:32 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:55.402 03:32:32 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:18:55.402 03:32:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:18:55.402 03:32:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:55.402 03:32:32 -- host/auth.sh@68 -- # digest=sha256 00:18:55.402 03:32:32 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:55.402 03:32:32 -- host/auth.sh@68 -- # keyid=3 00:18:55.402 03:32:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.402 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.402 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:55.402 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.402 03:32:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:55.402 03:32:32 -- nvmf/common.sh@717 -- # local ip 00:18:55.402 03:32:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:55.402 03:32:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:55.402 03:32:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.402 03:32:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.402 03:32:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:55.402 03:32:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.402 03:32:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:55.402 03:32:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:55.402 03:32:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:55.403 03:32:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:55.403 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.403 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:55.403 nvme0n1 00:18:55.403 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.403 03:32:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.403 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.403 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:55.403 03:32:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:55.403 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.403 03:32:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.403 03:32:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.403 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.403 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:55.699 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.699 03:32:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:55.699 03:32:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:55.699 03:32:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:55.699 03:32:32 -- host/auth.sh@44 -- # digest=sha256 00:18:55.699 03:32:32 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:55.699 03:32:32 -- host/auth.sh@44 -- # keyid=4 00:18:55.699 03:32:32 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:18:55.699 03:32:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:55.699 03:32:32 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:55.699 03:32:32 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:18:55.699 03:32:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:18:55.699 03:32:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:55.699 03:32:32 -- host/auth.sh@68 -- # digest=sha256 00:18:55.699 03:32:32 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:55.699 03:32:32 -- host/auth.sh@68 -- # keyid=4 00:18:55.699 03:32:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.699 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.699 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:55.699 03:32:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.699 03:32:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:55.699 03:32:32 -- nvmf/common.sh@717 -- # local ip 00:18:55.699 03:32:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:55.699 03:32:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:55.699 03:32:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.699 03:32:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.699 03:32:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:55.699 03:32:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.699 03:32:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:55.699 03:32:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:55.699 03:32:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:55.699 03:32:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:18:55.700 03:32:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.700 03:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:55.700 nvme0n1 00:18:55.700 03:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.700 03:32:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.700 03:32:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.700 03:32:33 -- common/autotest_common.sh@10 -- # set +x 00:18:55.700 03:32:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:55.700 03:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.700 03:32:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.700 03:32:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.700 03:32:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.700 03:32:33 -- common/autotest_common.sh@10 -- # set +x 00:18:55.700 03:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.700 03:32:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.700 03:32:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:55.700 03:32:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:55.700 03:32:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:55.700 03:32:33 -- host/auth.sh@44 -- # digest=sha256 00:18:55.700 03:32:33 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:55.700 03:32:33 -- host/auth.sh@44 -- # keyid=0 00:18:55.700 03:32:33 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:18:55.700 03:32:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:55.700 03:32:33 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:55.700 03:32:33 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:18:55.700 03:32:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:18:55.700 03:32:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:55.700 03:32:33 -- host/auth.sh@68 -- # digest=sha256 00:18:55.700 03:32:33 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:55.700 03:32:33 -- host/auth.sh@68 -- # keyid=0 00:18:55.700 03:32:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.700 03:32:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.700 03:32:33 -- common/autotest_common.sh@10 -- # set +x 00:18:55.700 03:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.700 03:32:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:55.700 03:32:33 -- nvmf/common.sh@717 -- # local ip 00:18:55.700 03:32:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:55.700 03:32:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:55.700 03:32:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.700 03:32:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.700 03:32:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:55.700 03:32:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.700 03:32:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:55.700 03:32:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:55.700 03:32:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:55.700 03:32:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:55.700 03:32:33 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:18:55.700 03:32:33 -- common/autotest_common.sh@10 -- # set +x 00:18:55.959 nvme0n1 00:18:55.959 03:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.959 03:32:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.959 03:32:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.959 03:32:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:55.959 03:32:33 -- common/autotest_common.sh@10 -- # set +x 00:18:55.959 03:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.959 03:32:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.959 03:32:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.959 03:32:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.959 03:32:33 -- common/autotest_common.sh@10 -- # set +x 00:18:55.959 03:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.959 03:32:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:55.959 03:32:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:55.959 03:32:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:55.959 03:32:33 -- host/auth.sh@44 -- # digest=sha256 00:18:55.959 03:32:33 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:55.959 03:32:33 -- host/auth.sh@44 -- # keyid=1 00:18:55.959 03:32:33 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:18:55.959 03:32:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:55.959 03:32:33 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:55.959 03:32:33 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:18:55.959 03:32:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:18:55.959 03:32:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:55.959 03:32:33 -- host/auth.sh@68 -- # digest=sha256 00:18:55.959 03:32:33 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:55.959 03:32:33 -- host/auth.sh@68 -- # keyid=1 00:18:55.959 03:32:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.959 03:32:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.959 03:32:33 -- common/autotest_common.sh@10 -- # set +x 00:18:55.959 03:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.218 03:32:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:56.218 03:32:33 -- nvmf/common.sh@717 -- # local ip 00:18:56.218 03:32:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:56.218 03:32:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:56.218 03:32:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.218 03:32:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.218 03:32:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:56.218 03:32:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.218 03:32:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:56.218 03:32:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:56.218 03:32:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:56.218 03:32:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:56.218 03:32:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.218 03:32:33 -- common/autotest_common.sh@10 -- # set +x 00:18:56.218 nvme0n1 00:18:56.218 
03:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.218 03:32:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.218 03:32:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.218 03:32:33 -- common/autotest_common.sh@10 -- # set +x 00:18:56.218 03:32:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:56.477 03:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.477 03:32:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.477 03:32:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.477 03:32:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.477 03:32:33 -- common/autotest_common.sh@10 -- # set +x 00:18:56.477 03:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.477 03:32:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:56.477 03:32:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:56.477 03:32:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:56.477 03:32:33 -- host/auth.sh@44 -- # digest=sha256 00:18:56.477 03:32:33 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:56.477 03:32:33 -- host/auth.sh@44 -- # keyid=2 00:18:56.477 03:32:33 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:18:56.477 03:32:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:56.477 03:32:33 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:56.477 03:32:33 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:18:56.477 03:32:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:18:56.477 03:32:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:56.477 03:32:33 -- host/auth.sh@68 -- # digest=sha256 00:18:56.477 03:32:33 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:56.477 03:32:33 -- host/auth.sh@68 -- # keyid=2 00:18:56.477 03:32:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:56.477 03:32:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.477 03:32:33 -- common/autotest_common.sh@10 -- # set +x 00:18:56.477 03:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.477 03:32:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:56.477 03:32:33 -- nvmf/common.sh@717 -- # local ip 00:18:56.477 03:32:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:56.477 03:32:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:56.477 03:32:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.477 03:32:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.477 03:32:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:56.477 03:32:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.477 03:32:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:56.477 03:32:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:56.477 03:32:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:56.477 03:32:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:56.477 03:32:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.477 03:32:33 -- common/autotest_common.sh@10 -- # set +x 00:18:56.737 nvme0n1 00:18:56.737 03:32:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.737 03:32:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.737 03:32:34 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.737 03:32:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:56.737 03:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:56.737 03:32:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.737 03:32:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.737 03:32:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.737 03:32:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.737 03:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:56.737 03:32:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.737 03:32:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:56.737 03:32:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:18:56.737 03:32:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:56.737 03:32:34 -- host/auth.sh@44 -- # digest=sha256 00:18:56.737 03:32:34 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:56.737 03:32:34 -- host/auth.sh@44 -- # keyid=3 00:18:56.737 03:32:34 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:18:56.737 03:32:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:56.737 03:32:34 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:56.737 03:32:34 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:18:56.737 03:32:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:18:56.737 03:32:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:56.737 03:32:34 -- host/auth.sh@68 -- # digest=sha256 00:18:56.737 03:32:34 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:56.737 03:32:34 -- host/auth.sh@68 -- # keyid=3 00:18:56.737 03:32:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:56.737 03:32:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.737 03:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:56.737 03:32:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.737 03:32:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:56.737 03:32:34 -- nvmf/common.sh@717 -- # local ip 00:18:56.737 03:32:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:56.737 03:32:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:56.737 03:32:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.737 03:32:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.737 03:32:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:56.737 03:32:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.737 03:32:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:56.737 03:32:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:56.737 03:32:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:56.737 03:32:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:56.737 03:32:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.737 03:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:56.998 nvme0n1 00:18:56.998 03:32:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.998 03:32:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.998 03:32:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.998 03:32:34 -- host/auth.sh@73 -- # jq -r '.[].name' 
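
On the initiator side, every iteration in this run is the same four RPCs, driven by the nested digest/dhgroup/keyid loops visible at host/auth.sh@107-110 (the lists come from the printf at @101). A condensed sketch of the loop follows; rpc_cmd in the log wraps SPDK's scripts/rpc.py, and the key names key0..key4 are assumed to have been registered with the matching DHHC-1 secrets earlier in auth.sh, before this excerpt.

    # Sketch of the test loop (host/auth.sh@107-110) around connect_authenticate (@66-74).
    rpc=scripts/rpc.py
    for digest in sha256 sha384 sha512; do
        for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
            for keyid in 0 1 2 3 4; do
                # Re-key the kernel target for this combination
                # (see the nvmet_auth_set_key sketch above).
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                # Restrict the initiator to the digest/dhgroup under test.
                $rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                # Attach with DH-HMAC-CHAP; this only succeeds if authentication passes.
                $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                    --dhchap-key "key${keyid}"
                # Confirm the controller came up, then tear it down for the next round.
                [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
                $rpc bdev_nvme_detach_controller nvme0
            done
        done
    done

Detaching between rounds is what produces the repeating bdev_nvme_get_controllers / bdev_nvme_detach_controller pattern in the trace: each digest/dhgroup/keyid combination must authenticate from a clean slate rather than reuse an established admin queue.
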
00:18:56.998 03:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:56.998 03:32:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.998 03:32:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.998 03:32:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.998 03:32:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.998 03:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:56.998 03:32:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.998 03:32:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:56.998 03:32:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:56.998 03:32:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:56.998 03:32:34 -- host/auth.sh@44 -- # digest=sha256 00:18:56.998 03:32:34 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:56.998 03:32:34 -- host/auth.sh@44 -- # keyid=4 00:18:56.998 03:32:34 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:18:56.998 03:32:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:56.998 03:32:34 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:56.998 03:32:34 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:18:56.998 03:32:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:18:56.998 03:32:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:56.998 03:32:34 -- host/auth.sh@68 -- # digest=sha256 00:18:56.998 03:32:34 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:56.998 03:32:34 -- host/auth.sh@68 -- # keyid=4 00:18:56.998 03:32:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:56.998 03:32:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.998 03:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:56.998 03:32:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.998 03:32:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:56.998 03:32:34 -- nvmf/common.sh@717 -- # local ip 00:18:56.998 03:32:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:56.998 03:32:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:56.998 03:32:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.998 03:32:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.998 03:32:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:56.998 03:32:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.998 03:32:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:56.998 03:32:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:56.998 03:32:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:56.998 03:32:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:56.998 03:32:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.998 03:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:57.258 nvme0n1 00:18:57.258 03:32:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.258 03:32:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.258 03:32:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.258 03:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:57.258 03:32:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:57.258 
03:32:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.258 03:32:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.258 03:32:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.258 03:32:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.258 03:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:57.258 03:32:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.258 03:32:34 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.258 03:32:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:57.258 03:32:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:57.258 03:32:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:57.258 03:32:34 -- host/auth.sh@44 -- # digest=sha256 00:18:57.258 03:32:34 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:57.258 03:32:34 -- host/auth.sh@44 -- # keyid=0 00:18:57.258 03:32:34 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:18:57.258 03:32:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:57.258 03:32:34 -- host/auth.sh@48 -- # echo ffdhe6144 00:18:57.258 03:32:34 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:18:57.258 03:32:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:18:57.258 03:32:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:57.258 03:32:34 -- host/auth.sh@68 -- # digest=sha256 00:18:57.258 03:32:34 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:18:57.258 03:32:34 -- host/auth.sh@68 -- # keyid=0 00:18:57.258 03:32:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:57.258 03:32:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.258 03:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:57.258 03:32:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.258 03:32:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:57.258 03:32:34 -- nvmf/common.sh@717 -- # local ip 00:18:57.258 03:32:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:57.258 03:32:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:57.258 03:32:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.258 03:32:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.258 03:32:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:57.259 03:32:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.259 03:32:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:57.259 03:32:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:57.259 03:32:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:57.259 03:32:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:57.259 03:32:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.259 03:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:57.827 nvme0n1 00:18:57.827 03:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.827 03:32:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.827 03:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.827 03:32:35 -- common/autotest_common.sh@10 -- # set +x 00:18:57.827 03:32:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:57.827 03:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.827 03:32:35 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.827 03:32:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.827 03:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.827 03:32:35 -- common/autotest_common.sh@10 -- # set +x 00:18:57.827 03:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.827 03:32:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:57.827 03:32:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:57.827 03:32:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:57.827 03:32:35 -- host/auth.sh@44 -- # digest=sha256 00:18:57.827 03:32:35 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:57.827 03:32:35 -- host/auth.sh@44 -- # keyid=1 00:18:57.827 03:32:35 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:18:57.827 03:32:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:57.827 03:32:35 -- host/auth.sh@48 -- # echo ffdhe6144 00:18:57.827 03:32:35 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:18:57.827 03:32:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:18:57.827 03:32:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:57.827 03:32:35 -- host/auth.sh@68 -- # digest=sha256 00:18:57.827 03:32:35 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:18:57.827 03:32:35 -- host/auth.sh@68 -- # keyid=1 00:18:57.827 03:32:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:57.827 03:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.827 03:32:35 -- common/autotest_common.sh@10 -- # set +x 00:18:57.827 03:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.827 03:32:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:57.827 03:32:35 -- nvmf/common.sh@717 -- # local ip 00:18:57.827 03:32:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:57.827 03:32:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:57.827 03:32:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.827 03:32:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.827 03:32:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:57.827 03:32:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.827 03:32:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:57.827 03:32:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:57.827 03:32:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:57.827 03:32:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:57.827 03:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.827 03:32:35 -- common/autotest_common.sh@10 -- # set +x 00:18:58.396 nvme0n1 00:18:58.396 03:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.396 03:32:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.396 03:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.396 03:32:35 -- common/autotest_common.sh@10 -- # set +x 00:18:58.396 03:32:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:58.396 03:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.396 03:32:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.396 03:32:35 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:58.396 03:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.396 03:32:35 -- common/autotest_common.sh@10 -- # set +x 00:18:58.396 03:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.396 03:32:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:58.396 03:32:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:58.396 03:32:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:58.396 03:32:35 -- host/auth.sh@44 -- # digest=sha256 00:18:58.396 03:32:35 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:58.396 03:32:35 -- host/auth.sh@44 -- # keyid=2 00:18:58.396 03:32:35 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:18:58.396 03:32:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:58.396 03:32:35 -- host/auth.sh@48 -- # echo ffdhe6144 00:18:58.396 03:32:35 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:18:58.396 03:32:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:18:58.396 03:32:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:58.396 03:32:35 -- host/auth.sh@68 -- # digest=sha256 00:18:58.396 03:32:35 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:18:58.396 03:32:35 -- host/auth.sh@68 -- # keyid=2 00:18:58.396 03:32:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.396 03:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.396 03:32:35 -- common/autotest_common.sh@10 -- # set +x 00:18:58.396 03:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.396 03:32:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:58.396 03:32:35 -- nvmf/common.sh@717 -- # local ip 00:18:58.396 03:32:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:58.396 03:32:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:58.396 03:32:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.396 03:32:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.396 03:32:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:58.396 03:32:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.396 03:32:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:58.396 03:32:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:58.396 03:32:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:58.396 03:32:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:58.396 03:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.396 03:32:35 -- common/autotest_common.sh@10 -- # set +x 00:18:58.966 nvme0n1 00:18:58.966 03:32:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.966 03:32:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.966 03:32:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.966 03:32:36 -- common/autotest_common.sh@10 -- # set +x 00:18:58.966 03:32:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:58.966 03:32:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.966 03:32:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.966 03:32:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.966 03:32:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.966 03:32:36 -- common/autotest_common.sh@10 -- # 
set +x 00:18:58.966 03:32:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.966 03:32:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:58.966 03:32:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:18:58.966 03:32:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:58.966 03:32:36 -- host/auth.sh@44 -- # digest=sha256 00:18:58.966 03:32:36 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:58.966 03:32:36 -- host/auth.sh@44 -- # keyid=3 00:18:58.966 03:32:36 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:18:58.966 03:32:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:58.966 03:32:36 -- host/auth.sh@48 -- # echo ffdhe6144 00:18:58.966 03:32:36 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:18:58.966 03:32:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:18:58.966 03:32:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:58.966 03:32:36 -- host/auth.sh@68 -- # digest=sha256 00:18:58.966 03:32:36 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:18:58.966 03:32:36 -- host/auth.sh@68 -- # keyid=3 00:18:58.966 03:32:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.966 03:32:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.966 03:32:36 -- common/autotest_common.sh@10 -- # set +x 00:18:58.966 03:32:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.226 03:32:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:59.226 03:32:36 -- nvmf/common.sh@717 -- # local ip 00:18:59.226 03:32:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:59.226 03:32:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:59.226 03:32:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.226 03:32:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.226 03:32:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:59.226 03:32:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.226 03:32:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:59.226 03:32:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:59.226 03:32:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:59.226 03:32:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:59.226 03:32:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.226 03:32:36 -- common/autotest_common.sh@10 -- # set +x 00:18:59.795 nvme0n1 00:18:59.795 03:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.795 03:32:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.795 03:32:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:59.795 03:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.795 03:32:37 -- common/autotest_common.sh@10 -- # set +x 00:18:59.795 03:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.795 03:32:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.795 03:32:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.795 03:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.795 03:32:37 -- common/autotest_common.sh@10 -- # set +x 00:18:59.795 03:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.795 03:32:37 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:59.795 03:32:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:59.795 03:32:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:59.795 03:32:37 -- host/auth.sh@44 -- # digest=sha256 00:18:59.795 03:32:37 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:59.795 03:32:37 -- host/auth.sh@44 -- # keyid=4 00:18:59.795 03:32:37 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:18:59.795 03:32:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:59.795 03:32:37 -- host/auth.sh@48 -- # echo ffdhe6144 00:18:59.795 03:32:37 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:18:59.795 03:32:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:18:59.795 03:32:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:59.795 03:32:37 -- host/auth.sh@68 -- # digest=sha256 00:18:59.795 03:32:37 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:18:59.795 03:32:37 -- host/auth.sh@68 -- # keyid=4 00:18:59.795 03:32:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:59.795 03:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.795 03:32:37 -- common/autotest_common.sh@10 -- # set +x 00:18:59.795 03:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.795 03:32:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:59.795 03:32:37 -- nvmf/common.sh@717 -- # local ip 00:18:59.795 03:32:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:59.795 03:32:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:59.795 03:32:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.795 03:32:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.795 03:32:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:59.795 03:32:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.795 03:32:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:59.795 03:32:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:59.795 03:32:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:59.795 03:32:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:59.795 03:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.795 03:32:37 -- common/autotest_common.sh@10 -- # set +x 00:19:00.364 nvme0n1 00:19:00.364 03:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.364 03:32:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.364 03:32:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:00.364 03:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.364 03:32:37 -- common/autotest_common.sh@10 -- # set +x 00:19:00.364 03:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.364 03:32:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.364 03:32:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.364 03:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.364 03:32:37 -- common/autotest_common.sh@10 -- # set +x 00:19:00.364 03:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.364 03:32:37 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.364 03:32:37 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:00.364 03:32:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:00.364 03:32:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:00.364 03:32:37 -- host/auth.sh@44 -- # digest=sha256 00:19:00.364 03:32:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:00.364 03:32:37 -- host/auth.sh@44 -- # keyid=0 00:19:00.364 03:32:37 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:00.364 03:32:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:00.365 03:32:37 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:00.365 03:32:37 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:00.365 03:32:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:19:00.365 03:32:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:00.365 03:32:37 -- host/auth.sh@68 -- # digest=sha256 00:19:00.365 03:32:37 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:00.365 03:32:37 -- host/auth.sh@68 -- # keyid=0 00:19:00.365 03:32:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:00.365 03:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.365 03:32:37 -- common/autotest_common.sh@10 -- # set +x 00:19:00.365 03:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.365 03:32:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:00.365 03:32:37 -- nvmf/common.sh@717 -- # local ip 00:19:00.365 03:32:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:00.365 03:32:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:00.365 03:32:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.365 03:32:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.365 03:32:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:00.365 03:32:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.365 03:32:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:00.365 03:32:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:00.365 03:32:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:00.365 03:32:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:00.365 03:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.365 03:32:37 -- common/autotest_common.sh@10 -- # set +x 00:19:01.303 nvme0n1 00:19:01.303 03:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.303 03:32:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.303 03:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.303 03:32:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.303 03:32:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:01.303 03:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.303 03:32:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.303 03:32:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.303 03:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.303 03:32:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.303 03:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.303 03:32:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:01.303 03:32:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:01.303 03:32:38 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:01.303 03:32:38 -- host/auth.sh@44 -- # digest=sha256 00:19:01.303 03:32:38 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:01.303 03:32:38 -- host/auth.sh@44 -- # keyid=1 00:19:01.303 03:32:38 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:01.303 03:32:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:01.303 03:32:38 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:01.303 03:32:38 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:01.303 03:32:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:19:01.303 03:32:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:01.303 03:32:38 -- host/auth.sh@68 -- # digest=sha256 00:19:01.303 03:32:38 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:01.303 03:32:38 -- host/auth.sh@68 -- # keyid=1 00:19:01.303 03:32:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:01.303 03:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.303 03:32:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.303 03:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.303 03:32:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:01.303 03:32:38 -- nvmf/common.sh@717 -- # local ip 00:19:01.303 03:32:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:01.303 03:32:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:01.303 03:32:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.303 03:32:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.303 03:32:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:01.303 03:32:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.303 03:32:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:01.303 03:32:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:01.303 03:32:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:01.303 03:32:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:01.303 03:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.303 03:32:38 -- common/autotest_common.sh@10 -- # set +x 00:19:02.240 nvme0n1 00:19:02.240 03:32:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.240 03:32:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.240 03:32:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.240 03:32:39 -- common/autotest_common.sh@10 -- # set +x 00:19:02.240 03:32:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:02.240 03:32:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.240 03:32:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.240 03:32:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.240 03:32:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.240 03:32:39 -- common/autotest_common.sh@10 -- # set +x 00:19:02.240 03:32:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.240 03:32:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:02.240 03:32:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:02.240 03:32:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:02.240 03:32:39 -- host/auth.sh@44 -- # digest=sha256 
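The nvmet_auth_set_key calls traced here (auth.sh@42-49) provision the target side of the handshake: for each combination the helper receives the digest, DH group and key id, looks up the matching DHHC-1 secret, and pushes all three to the kernel nvmet host entry via the echo calls at @47-49. A minimal sketch of that step, assuming the echoes redirect into the standard Linux nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key) under the host's NQN directory; the redirection targets are not visible in the xtrace itself:

    # Hypothetical reconstruction; the $keys array and the configfs path are assumptions.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]}   # DHHC-1:0X:... secret for this key id
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha256)
        echo "$dhgroup" > "$host/dhchap_dhgroup"       # e.g. ffdhe8192
        echo "$key" > "$host/dhchap_key"               # shared DH-HMAC-CHAP secret
    }

The DHHC-1:00/01/02/03 prefixes on the secrets encode how the key material was transformed: keys 0 and 1 in this run are non-transformed (00), while keys 2-4 use secrets transformed with SHA-256/384/512 (01/02/03).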
00:19:02.240 03:32:39 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:02.240 03:32:39 -- host/auth.sh@44 -- # keyid=2 00:19:02.240 03:32:39 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:02.240 03:32:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:02.240 03:32:39 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:02.240 03:32:39 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:02.240 03:32:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:19:02.240 03:32:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:02.240 03:32:39 -- host/auth.sh@68 -- # digest=sha256 00:19:02.240 03:32:39 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:02.240 03:32:39 -- host/auth.sh@68 -- # keyid=2 00:19:02.241 03:32:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:02.241 03:32:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.241 03:32:39 -- common/autotest_common.sh@10 -- # set +x 00:19:02.241 03:32:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.241 03:32:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:02.241 03:32:39 -- nvmf/common.sh@717 -- # local ip 00:19:02.241 03:32:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:02.241 03:32:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:02.241 03:32:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.241 03:32:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.241 03:32:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:02.241 03:32:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.241 03:32:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:02.241 03:32:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:02.241 03:32:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:02.241 03:32:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:02.241 03:32:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.241 03:32:39 -- common/autotest_common.sh@10 -- # set +x 00:19:03.181 nvme0n1 00:19:03.181 03:32:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.181 03:32:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.181 03:32:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:03.181 03:32:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.181 03:32:40 -- common/autotest_common.sh@10 -- # set +x 00:19:03.181 03:32:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.181 03:32:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.181 03:32:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.181 03:32:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.181 03:32:40 -- common/autotest_common.sh@10 -- # set +x 00:19:03.181 03:32:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.181 03:32:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:03.181 03:32:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:03.181 03:32:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:03.181 03:32:40 -- host/auth.sh@44 -- # digest=sha256 00:19:03.181 03:32:40 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:03.181 03:32:40 -- host/auth.sh@44 -- # keyid=3 00:19:03.181 03:32:40 -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:03.181 03:32:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:03.181 03:32:40 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:03.181 03:32:40 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:03.181 03:32:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:19:03.181 03:32:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:03.181 03:32:40 -- host/auth.sh@68 -- # digest=sha256 00:19:03.181 03:32:40 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:03.181 03:32:40 -- host/auth.sh@68 -- # keyid=3 00:19:03.181 03:32:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:03.181 03:32:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.181 03:32:40 -- common/autotest_common.sh@10 -- # set +x 00:19:03.181 03:32:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.181 03:32:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:03.181 03:32:40 -- nvmf/common.sh@717 -- # local ip 00:19:03.181 03:32:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:03.181 03:32:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:03.181 03:32:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.181 03:32:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.181 03:32:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:03.181 03:32:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.181 03:32:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:03.181 03:32:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:03.181 03:32:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:03.181 03:32:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:03.181 03:32:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.181 03:32:40 -- common/autotest_common.sh@10 -- # set +x 00:19:04.119 nvme0n1 00:19:04.119 03:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.119 03:32:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.119 03:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.119 03:32:41 -- common/autotest_common.sh@10 -- # set +x 00:19:04.119 03:32:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:04.119 03:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.119 03:32:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.119 03:32:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.119 03:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.119 03:32:41 -- common/autotest_common.sh@10 -- # set +x 00:19:04.379 03:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.379 03:32:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:04.379 03:32:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:04.379 03:32:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:04.379 03:32:41 -- host/auth.sh@44 -- # digest=sha256 00:19:04.379 03:32:41 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:04.379 03:32:41 -- host/auth.sh@44 -- # keyid=4 00:19:04.379 03:32:41 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:04.379 
03:32:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:04.379 03:32:41 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:04.379 03:32:41 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:04.379 03:32:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:19:04.379 03:32:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:04.379 03:32:41 -- host/auth.sh@68 -- # digest=sha256 00:19:04.379 03:32:41 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:04.379 03:32:41 -- host/auth.sh@68 -- # keyid=4 00:19:04.379 03:32:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:04.379 03:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.379 03:32:41 -- common/autotest_common.sh@10 -- # set +x 00:19:04.379 03:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.379 03:32:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:04.379 03:32:41 -- nvmf/common.sh@717 -- # local ip 00:19:04.379 03:32:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:04.379 03:32:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:04.379 03:32:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.379 03:32:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.379 03:32:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:04.379 03:32:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.379 03:32:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:04.379 03:32:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:04.379 03:32:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:04.379 03:32:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:04.379 03:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.379 03:32:41 -- common/autotest_common.sh@10 -- # set +x 00:19:05.316 nvme0n1 00:19:05.316 03:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.317 03:32:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.317 03:32:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:05.317 03:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.317 03:32:42 -- common/autotest_common.sh@10 -- # set +x 00:19:05.317 03:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.317 03:32:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.317 03:32:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.317 03:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.317 03:32:42 -- common/autotest_common.sh@10 -- # set +x 00:19:05.317 03:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.317 03:32:42 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:05.317 03:32:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.317 03:32:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:05.317 03:32:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:05.317 03:32:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:05.317 03:32:42 -- host/auth.sh@44 -- # digest=sha384 00:19:05.317 03:32:42 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:05.317 03:32:42 -- host/auth.sh@44 -- # keyid=0 00:19:05.317 03:32:42 -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:05.317 03:32:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:05.317 03:32:42 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:05.317 03:32:42 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:05.317 03:32:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:19:05.317 03:32:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:05.317 03:32:42 -- host/auth.sh@68 -- # digest=sha384 00:19:05.317 03:32:42 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:05.317 03:32:42 -- host/auth.sh@68 -- # keyid=0 00:19:05.317 03:32:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:05.317 03:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.317 03:32:42 -- common/autotest_common.sh@10 -- # set +x 00:19:05.317 03:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.317 03:32:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:05.317 03:32:42 -- nvmf/common.sh@717 -- # local ip 00:19:05.317 03:32:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:05.317 03:32:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:05.317 03:32:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.317 03:32:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.317 03:32:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:05.317 03:32:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.317 03:32:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:05.317 03:32:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:05.317 03:32:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:05.317 03:32:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:05.317 03:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.317 03:32:42 -- common/autotest_common.sh@10 -- # set +x 00:19:05.317 nvme0n1 00:19:05.317 03:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.317 03:32:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.317 03:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.317 03:32:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:05.317 03:32:42 -- common/autotest_common.sh@10 -- # set +x 00:19:05.317 03:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.575 03:32:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.575 03:32:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.575 03:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.575 03:32:42 -- common/autotest_common.sh@10 -- # set +x 00:19:05.575 03:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.575 03:32:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:05.575 03:32:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:05.575 03:32:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:05.575 03:32:42 -- host/auth.sh@44 -- # digest=sha384 00:19:05.575 03:32:42 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:05.575 03:32:42 -- host/auth.sh@44 -- # keyid=1 00:19:05.575 03:32:42 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:05.575 03:32:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:05.575 
03:32:42 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:05.575 03:32:42 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:05.575 03:32:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:19:05.575 03:32:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:05.575 03:32:42 -- host/auth.sh@68 -- # digest=sha384 00:19:05.575 03:32:42 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:05.575 03:32:42 -- host/auth.sh@68 -- # keyid=1 00:19:05.575 03:32:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:05.575 03:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.575 03:32:42 -- common/autotest_common.sh@10 -- # set +x 00:19:05.575 03:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.575 03:32:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:05.575 03:32:42 -- nvmf/common.sh@717 -- # local ip 00:19:05.575 03:32:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:05.575 03:32:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:05.575 03:32:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.575 03:32:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.575 03:32:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:05.575 03:32:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.575 03:32:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:05.575 03:32:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:05.575 03:32:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:05.575 03:32:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:05.575 03:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.575 03:32:42 -- common/autotest_common.sh@10 -- # set +x 00:19:05.575 nvme0n1 00:19:05.575 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.575 03:32:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.575 03:32:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:05.575 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.575 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:05.575 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.575 03:32:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.575 03:32:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.575 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.575 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:05.575 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.575 03:32:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:05.575 03:32:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:05.575 03:32:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:05.575 03:32:43 -- host/auth.sh@44 -- # digest=sha384 00:19:05.575 03:32:43 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:05.575 03:32:43 -- host/auth.sh@44 -- # keyid=2 00:19:05.575 03:32:43 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:05.575 03:32:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:05.575 03:32:43 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:05.575 03:32:43 -- host/auth.sh@49 -- # echo 
DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:05.575 03:32:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:19:05.575 03:32:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:05.575 03:32:43 -- host/auth.sh@68 -- # digest=sha384 00:19:05.575 03:32:43 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:05.575 03:32:43 -- host/auth.sh@68 -- # keyid=2 00:19:05.575 03:32:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:05.575 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.575 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:05.575 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.575 03:32:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:05.575 03:32:43 -- nvmf/common.sh@717 -- # local ip 00:19:05.575 03:32:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:05.576 03:32:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:05.576 03:32:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.576 03:32:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.576 03:32:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:05.576 03:32:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.576 03:32:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:05.576 03:32:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:05.576 03:32:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:05.576 03:32:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:05.576 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.576 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:05.834 nvme0n1 00:19:05.834 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.834 03:32:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.834 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.835 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:05.835 03:32:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:05.835 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.835 03:32:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.835 03:32:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.835 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.835 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:05.835 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.835 03:32:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:05.835 03:32:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:05.835 03:32:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:05.835 03:32:43 -- host/auth.sh@44 -- # digest=sha384 00:19:05.835 03:32:43 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:05.835 03:32:43 -- host/auth.sh@44 -- # keyid=3 00:19:05.835 03:32:43 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:05.835 03:32:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:05.835 03:32:43 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:05.835 03:32:43 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:05.835 03:32:43 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:19:05.835 03:32:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:05.835 03:32:43 -- host/auth.sh@68 -- # digest=sha384 00:19:05.835 03:32:43 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:05.835 03:32:43 -- host/auth.sh@68 -- # keyid=3 00:19:05.835 03:32:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:05.835 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.835 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:05.835 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.835 03:32:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:05.835 03:32:43 -- nvmf/common.sh@717 -- # local ip 00:19:05.835 03:32:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:05.835 03:32:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:05.835 03:32:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.835 03:32:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.835 03:32:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:05.835 03:32:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.835 03:32:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:05.835 03:32:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:05.835 03:32:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:05.835 03:32:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:05.835 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.835 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:06.094 nvme0n1 00:19:06.094 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.094 03:32:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.094 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.094 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:06.094 03:32:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.094 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.094 03:32:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.094 03:32:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.094 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.094 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:06.094 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.094 03:32:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.094 03:32:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:06.094 03:32:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.094 03:32:43 -- host/auth.sh@44 -- # digest=sha384 00:19:06.094 03:32:43 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:06.094 03:32:43 -- host/auth.sh@44 -- # keyid=4 00:19:06.094 03:32:43 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:06.094 03:32:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.094 03:32:43 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:06.094 03:32:43 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:06.094 03:32:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:19:06.094 03:32:43 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:19:06.094 03:32:43 -- host/auth.sh@68 -- # digest=sha384 00:19:06.094 03:32:43 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:06.094 03:32:43 -- host/auth.sh@68 -- # keyid=4 00:19:06.094 03:32:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:06.094 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.094 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:06.094 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.094 03:32:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.094 03:32:43 -- nvmf/common.sh@717 -- # local ip 00:19:06.094 03:32:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.094 03:32:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.094 03:32:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.094 03:32:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.094 03:32:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.094 03:32:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.094 03:32:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.094 03:32:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.094 03:32:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.094 03:32:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:06.094 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.094 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:06.094 nvme0n1 00:19:06.094 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.094 03:32:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.094 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.094 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:06.094 03:32:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.094 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.094 03:32:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.094 03:32:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.094 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.094 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:06.354 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.354 03:32:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.354 03:32:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.354 03:32:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:06.354 03:32:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.354 03:32:43 -- host/auth.sh@44 -- # digest=sha384 00:19:06.354 03:32:43 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:06.354 03:32:43 -- host/auth.sh@44 -- # keyid=0 00:19:06.354 03:32:43 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:06.354 03:32:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.354 03:32:43 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:06.354 03:32:43 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:06.354 03:32:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:19:06.354 03:32:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.354 03:32:43 -- host/auth.sh@68 -- # 
digest=sha384 00:19:06.354 03:32:43 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:06.354 03:32:43 -- host/auth.sh@68 -- # keyid=0 00:19:06.354 03:32:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:06.355 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.355 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:06.355 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.355 03:32:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.355 03:32:43 -- nvmf/common.sh@717 -- # local ip 00:19:06.355 03:32:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.355 03:32:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.355 03:32:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.355 03:32:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.355 03:32:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.355 03:32:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.355 03:32:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.355 03:32:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.355 03:32:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.355 03:32:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:06.355 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.355 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:06.355 nvme0n1 00:19:06.355 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.355 03:32:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.355 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.355 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:06.355 03:32:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.355 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.355 03:32:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.355 03:32:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.355 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.355 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:06.355 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.355 03:32:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.355 03:32:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:06.355 03:32:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.355 03:32:43 -- host/auth.sh@44 -- # digest=sha384 00:19:06.355 03:32:43 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:06.355 03:32:43 -- host/auth.sh@44 -- # keyid=1 00:19:06.355 03:32:43 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:06.355 03:32:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.355 03:32:43 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:06.355 03:32:43 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:06.355 03:32:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:19:06.355 03:32:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.355 03:32:43 -- host/auth.sh@68 -- # digest=sha384 00:19:06.355 03:32:43 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:06.355 03:32:43 -- host/auth.sh@68 
-- # keyid=1 00:19:06.355 03:32:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:06.355 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.355 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:06.355 03:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.355 03:32:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.355 03:32:43 -- nvmf/common.sh@717 -- # local ip 00:19:06.355 03:32:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.355 03:32:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.355 03:32:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.355 03:32:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.355 03:32:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.355 03:32:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.355 03:32:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.355 03:32:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.355 03:32:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.355 03:32:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:06.355 03:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.355 03:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:06.613 nvme0n1 00:19:06.613 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.613 03:32:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.613 03:32:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.613 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.613 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:06.613 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.613 03:32:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.613 03:32:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.613 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.613 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:06.613 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.613 03:32:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.613 03:32:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:06.613 03:32:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.613 03:32:44 -- host/auth.sh@44 -- # digest=sha384 00:19:06.613 03:32:44 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:06.613 03:32:44 -- host/auth.sh@44 -- # keyid=2 00:19:06.613 03:32:44 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:06.613 03:32:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.613 03:32:44 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:06.613 03:32:44 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:06.613 03:32:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:19:06.613 03:32:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.613 03:32:44 -- host/auth.sh@68 -- # digest=sha384 00:19:06.613 03:32:44 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:06.613 03:32:44 -- host/auth.sh@68 -- # keyid=2 00:19:06.613 03:32:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:06.613 03:32:44 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.613 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:06.613 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.613 03:32:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.613 03:32:44 -- nvmf/common.sh@717 -- # local ip 00:19:06.613 03:32:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.613 03:32:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.613 03:32:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.613 03:32:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.613 03:32:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.613 03:32:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.613 03:32:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.613 03:32:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.613 03:32:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.613 03:32:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:06.613 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.613 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:06.871 nvme0n1 00:19:06.871 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.871 03:32:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.871 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.871 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:06.871 03:32:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.871 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.871 03:32:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.871 03:32:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.871 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.871 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:06.871 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.871 03:32:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.871 03:32:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:06.871 03:32:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.871 03:32:44 -- host/auth.sh@44 -- # digest=sha384 00:19:06.871 03:32:44 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:06.871 03:32:44 -- host/auth.sh@44 -- # keyid=3 00:19:06.871 03:32:44 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:06.871 03:32:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.871 03:32:44 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:06.872 03:32:44 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:06.872 03:32:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:19:06.872 03:32:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.872 03:32:44 -- host/auth.sh@68 -- # digest=sha384 00:19:06.872 03:32:44 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:06.872 03:32:44 -- host/auth.sh@68 -- # keyid=3 00:19:06.872 03:32:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:06.872 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.872 03:32:44 -- common/autotest_common.sh@10 -- # set +x 
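On the host side, each pass of the loops at auth.sh@107-109 (digest x dhgroup x key id) runs the same connect_authenticate sequence the trace keeps repeating: restrict the initiator to the combination under test, attach with the matching key, confirm the controller appeared, and tear it down so the next iteration can reuse the nvme0 name. Condensed into a sketch (rpc_cmd is assumed to be the harness wrapper around SPDK's scripts/rpc.py; every flag below is taken verbatim from the trace):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Allow only the digest/dhgroup pair under test on the initiator.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Connect with the matching secret; 10.0.0.1 is what
        # get_main_ns_ip resolves to on this tcp run.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"

        # DH-HMAC-CHAP succeeded iff the controller shows up, then detach.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The standalone nvme0n1 lines interleaved in the trace are the bdev names printed by each successful bdev_nvme_attach_controller call, and the [[ nvme0 == \n\v\m\e\0 ]] checks are the jq output being compared against the expected controller name.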
00:19:06.872 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.872 03:32:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.872 03:32:44 -- nvmf/common.sh@717 -- # local ip 00:19:06.872 03:32:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.872 03:32:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.872 03:32:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.872 03:32:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.872 03:32:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.872 03:32:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.872 03:32:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.872 03:32:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.872 03:32:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.872 03:32:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:06.872 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.872 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:06.872 nvme0n1 00:19:06.872 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.130 03:32:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.130 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.130 03:32:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:07.130 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.130 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.130 03:32:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.130 03:32:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.130 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.130 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.130 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.130 03:32:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:07.130 03:32:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:07.130 03:32:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:07.130 03:32:44 -- host/auth.sh@44 -- # digest=sha384 00:19:07.130 03:32:44 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:07.130 03:32:44 -- host/auth.sh@44 -- # keyid=4 00:19:07.130 03:32:44 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:07.130 03:32:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:07.130 03:32:44 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:07.130 03:32:44 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:07.130 03:32:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:19:07.130 03:32:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:07.130 03:32:44 -- host/auth.sh@68 -- # digest=sha384 00:19:07.130 03:32:44 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:07.130 03:32:44 -- host/auth.sh@68 -- # keyid=4 00:19:07.130 03:32:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:07.130 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.130 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.130 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
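The get_main_ns_ip helper traced just above (and again below, nvmf/common.sh@717-731) is how the 10.0.0.1 address in every attach call gets chosen: it maps the transport to the name of an environment variable and then dereferences it. A rough reconstruction from the xtrace; the TEST_TRANSPORT variable name and the ${!ip} indirection are inferred from the [[ -z tcp ]] / [[ -z 10.0.0.1 ]] checks, not visible in the trace itself:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP      # this run: resolves to 10.0.0.1
        )

        # Bail out on an unknown transport (the two [[ -z ]] checks at @723).
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # variable *name*, e.g. NVMF_INITIATOR_IP
        ip=${!ip}                              # dereference, e.g. 10.0.0.1
        [[ -z $ip ]] && return 1               # the [[ -z 10.0.0.1 ]] check at @726
        echo "$ip"
    }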
00:19:07.130 03:32:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:07.130 03:32:44 -- nvmf/common.sh@717 -- # local ip 00:19:07.130 03:32:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.130 03:32:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.130 03:32:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.130 03:32:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.130 03:32:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:07.130 03:32:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.130 03:32:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:07.130 03:32:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:07.130 03:32:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:07.130 03:32:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:07.130 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.130 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.130 nvme0n1 00:19:07.130 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.130 03:32:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.130 03:32:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:07.130 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.130 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.130 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.130 03:32:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.130 03:32:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.130 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.130 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.130 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.130 03:32:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.130 03:32:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:07.130 03:32:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:07.130 03:32:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:07.130 03:32:44 -- host/auth.sh@44 -- # digest=sha384 00:19:07.130 03:32:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:07.130 03:32:44 -- host/auth.sh@44 -- # keyid=0 00:19:07.130 03:32:44 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:07.130 03:32:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:07.130 03:32:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:07.130 03:32:44 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:07.130 03:32:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:19:07.130 03:32:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:07.130 03:32:44 -- host/auth.sh@68 -- # digest=sha384 00:19:07.130 03:32:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:07.130 03:32:44 -- host/auth.sh@68 -- # keyid=0 00:19:07.130 03:32:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.130 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.130 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.130 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.130 03:32:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:07.388 03:32:44 -- 
nvmf/common.sh@717 -- # local ip 00:19:07.388 03:32:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.388 03:32:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.388 03:32:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.388 03:32:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.388 03:32:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:07.388 03:32:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.388 03:32:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:07.388 03:32:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:07.388 03:32:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:07.388 03:32:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:07.388 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.388 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.388 nvme0n1 00:19:07.388 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.388 03:32:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.388 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.388 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.388 03:32:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:07.388 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.648 03:32:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.648 03:32:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.648 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.648 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.648 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.648 03:32:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:07.648 03:32:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:07.648 03:32:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:07.648 03:32:44 -- host/auth.sh@44 -- # digest=sha384 00:19:07.648 03:32:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:07.648 03:32:44 -- host/auth.sh@44 -- # keyid=1 00:19:07.648 03:32:44 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:07.648 03:32:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:07.648 03:32:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:07.648 03:32:44 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:07.648 03:32:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:19:07.648 03:32:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:07.648 03:32:44 -- host/auth.sh@68 -- # digest=sha384 00:19:07.648 03:32:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:07.648 03:32:44 -- host/auth.sh@68 -- # keyid=1 00:19:07.648 03:32:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.648 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.648 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.648 03:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.648 03:32:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:07.648 03:32:44 -- nvmf/common.sh@717 -- # local ip 00:19:07.648 03:32:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.648 03:32:44 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.648 03:32:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.648 03:32:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.648 03:32:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:07.648 03:32:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.648 03:32:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:07.648 03:32:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:07.648 03:32:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:07.648 03:32:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:07.648 03:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.648 03:32:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.907 nvme0n1 00:19:07.907 03:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.907 03:32:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.907 03:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.907 03:32:45 -- common/autotest_common.sh@10 -- # set +x 00:19:07.907 03:32:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:07.907 03:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.907 03:32:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.907 03:32:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.907 03:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.907 03:32:45 -- common/autotest_common.sh@10 -- # set +x 00:19:07.907 03:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.907 03:32:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:07.907 03:32:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:07.907 03:32:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:07.907 03:32:45 -- host/auth.sh@44 -- # digest=sha384 00:19:07.907 03:32:45 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:07.907 03:32:45 -- host/auth.sh@44 -- # keyid=2 00:19:07.907 03:32:45 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:07.907 03:32:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:07.907 03:32:45 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:07.907 03:32:45 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:07.907 03:32:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:19:07.907 03:32:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:07.907 03:32:45 -- host/auth.sh@68 -- # digest=sha384 00:19:07.907 03:32:45 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:07.907 03:32:45 -- host/auth.sh@68 -- # keyid=2 00:19:07.907 03:32:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.907 03:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.907 03:32:45 -- common/autotest_common.sh@10 -- # set +x 00:19:07.907 03:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.907 03:32:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:07.907 03:32:45 -- nvmf/common.sh@717 -- # local ip 00:19:07.907 03:32:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.907 03:32:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.907 03:32:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.907 03:32:45 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.907 03:32:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:07.907 03:32:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.907 03:32:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:07.907 03:32:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:07.907 03:32:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:07.907 03:32:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:07.907 03:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.907 03:32:45 -- common/autotest_common.sh@10 -- # set +x 00:19:08.166 nvme0n1 00:19:08.166 03:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.166 03:32:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.166 03:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.166 03:32:45 -- common/autotest_common.sh@10 -- # set +x 00:19:08.166 03:32:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:08.166 03:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.166 03:32:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.166 03:32:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.166 03:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.166 03:32:45 -- common/autotest_common.sh@10 -- # set +x 00:19:08.166 03:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.166 03:32:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:08.166 03:32:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:08.166 03:32:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:08.166 03:32:45 -- host/auth.sh@44 -- # digest=sha384 00:19:08.166 03:32:45 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:08.166 03:32:45 -- host/auth.sh@44 -- # keyid=3 00:19:08.166 03:32:45 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:08.166 03:32:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:08.166 03:32:45 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:08.166 03:32:45 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:08.166 03:32:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:19:08.166 03:32:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:08.166 03:32:45 -- host/auth.sh@68 -- # digest=sha384 00:19:08.166 03:32:45 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:08.166 03:32:45 -- host/auth.sh@68 -- # keyid=3 00:19:08.166 03:32:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:08.166 03:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.166 03:32:45 -- common/autotest_common.sh@10 -- # set +x 00:19:08.166 03:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.166 03:32:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:08.166 03:32:45 -- nvmf/common.sh@717 -- # local ip 00:19:08.166 03:32:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:08.166 03:32:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:08.166 03:32:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.166 03:32:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.166 03:32:45 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:19:08.166 03:32:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.166 03:32:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:08.166 03:32:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:08.166 03:32:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:08.166 03:32:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:08.166 03:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.166 03:32:45 -- common/autotest_common.sh@10 -- # set +x 00:19:08.427 nvme0n1 00:19:08.427 03:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.427 03:32:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.427 03:32:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:08.427 03:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.427 03:32:45 -- common/autotest_common.sh@10 -- # set +x 00:19:08.427 03:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.427 03:32:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.427 03:32:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.427 03:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.427 03:32:45 -- common/autotest_common.sh@10 -- # set +x 00:19:08.427 03:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.427 03:32:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:08.427 03:32:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:08.427 03:32:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:08.427 03:32:45 -- host/auth.sh@44 -- # digest=sha384 00:19:08.427 03:32:45 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:08.427 03:32:45 -- host/auth.sh@44 -- # keyid=4 00:19:08.427 03:32:45 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:08.427 03:32:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:08.427 03:32:45 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:08.427 03:32:45 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:08.427 03:32:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:19:08.427 03:32:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:08.427 03:32:45 -- host/auth.sh@68 -- # digest=sha384 00:19:08.427 03:32:45 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:08.427 03:32:45 -- host/auth.sh@68 -- # keyid=4 00:19:08.427 03:32:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:08.427 03:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.427 03:32:45 -- common/autotest_common.sh@10 -- # set +x 00:19:08.427 03:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.427 03:32:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:08.427 03:32:45 -- nvmf/common.sh@717 -- # local ip 00:19:08.427 03:32:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:08.427 03:32:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:08.427 03:32:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.427 03:32:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.427 03:32:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:08.427 03:32:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:19:08.427 03:32:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:08.427 03:32:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:08.427 03:32:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:08.427 03:32:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:08.427 03:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.427 03:32:45 -- common/autotest_common.sh@10 -- # set +x 00:19:08.687 nvme0n1 00:19:08.688 03:32:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.688 03:32:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.688 03:32:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:08.688 03:32:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.688 03:32:46 -- common/autotest_common.sh@10 -- # set +x 00:19:08.688 03:32:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.688 03:32:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.688 03:32:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.688 03:32:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.688 03:32:46 -- common/autotest_common.sh@10 -- # set +x 00:19:08.688 03:32:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.688 03:32:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.688 03:32:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:08.688 03:32:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:08.688 03:32:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:08.688 03:32:46 -- host/auth.sh@44 -- # digest=sha384 00:19:08.688 03:32:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:08.688 03:32:46 -- host/auth.sh@44 -- # keyid=0 00:19:08.688 03:32:46 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:08.688 03:32:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:08.688 03:32:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:08.688 03:32:46 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:08.688 03:32:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:19:08.688 03:32:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:08.688 03:32:46 -- host/auth.sh@68 -- # digest=sha384 00:19:08.688 03:32:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:08.688 03:32:46 -- host/auth.sh@68 -- # keyid=0 00:19:08.688 03:32:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:08.688 03:32:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.688 03:32:46 -- common/autotest_common.sh@10 -- # set +x 00:19:08.688 03:32:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.688 03:32:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:08.688 03:32:46 -- nvmf/common.sh@717 -- # local ip 00:19:08.688 03:32:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:08.688 03:32:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:08.688 03:32:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.688 03:32:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.688 03:32:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:08.688 03:32:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.688 03:32:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:08.688 
03:32:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:08.688 03:32:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:08.688 03:32:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:08.688 03:32:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.688 03:32:46 -- common/autotest_common.sh@10 -- # set +x 00:19:09.257 nvme0n1 00:19:09.257 03:32:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.258 03:32:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.258 03:32:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.258 03:32:46 -- common/autotest_common.sh@10 -- # set +x 00:19:09.258 03:32:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:09.258 03:32:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.258 03:32:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.258 03:32:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.258 03:32:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.258 03:32:46 -- common/autotest_common.sh@10 -- # set +x 00:19:09.258 03:32:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.258 03:32:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:09.258 03:32:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:09.258 03:32:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:09.258 03:32:46 -- host/auth.sh@44 -- # digest=sha384 00:19:09.258 03:32:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:09.258 03:32:46 -- host/auth.sh@44 -- # keyid=1 00:19:09.258 03:32:46 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:09.258 03:32:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:09.258 03:32:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:09.258 03:32:46 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:09.258 03:32:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:19:09.258 03:32:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:09.258 03:32:46 -- host/auth.sh@68 -- # digest=sha384 00:19:09.258 03:32:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:09.258 03:32:46 -- host/auth.sh@68 -- # keyid=1 00:19:09.258 03:32:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:09.258 03:32:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.258 03:32:46 -- common/autotest_common.sh@10 -- # set +x 00:19:09.258 03:32:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.258 03:32:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:09.258 03:32:46 -- nvmf/common.sh@717 -- # local ip 00:19:09.258 03:32:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:09.258 03:32:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:09.258 03:32:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.258 03:32:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.258 03:32:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:09.258 03:32:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.258 03:32:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:09.258 03:32:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:09.258 03:32:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
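get_main_ns_ip runs before every attach, and the trace spells out its whole decision: an associative array maps each transport to the name of the variable holding the initiator-side address (rdma to NVMF_FIRST_TARGET_IP, tcp to NVMF_INITIATOR_IP), and on the tcp transport it resolves to 10.0.0.1 every time. A reconstruction from nvmf/common.sh lines 717-731 as traced; the indirect ${!ip} expansion between lines 724 and 726 is an assumption, since the trace jumps from the variable name straight to the non-empty check on 10.0.0.1:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to dereference
        ip=${!ip}                              # assumed indirection -> 10.0.0.1 here
        [[ -z $ip ]] && return 1
        echo "$ip"
    }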
00:19:09.258 03:32:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:09.258 03:32:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.258 03:32:46 -- common/autotest_common.sh@10 -- # set +x 00:19:09.829 nvme0n1 00:19:09.829 03:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.829 03:32:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.829 03:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.829 03:32:47 -- common/autotest_common.sh@10 -- # set +x 00:19:09.829 03:32:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:09.829 03:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.829 03:32:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.829 03:32:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.829 03:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.829 03:32:47 -- common/autotest_common.sh@10 -- # set +x 00:19:09.829 03:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.829 03:32:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:09.829 03:32:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:09.829 03:32:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:09.829 03:32:47 -- host/auth.sh@44 -- # digest=sha384 00:19:09.829 03:32:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:09.829 03:32:47 -- host/auth.sh@44 -- # keyid=2 00:19:09.829 03:32:47 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:09.829 03:32:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:09.829 03:32:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:09.829 03:32:47 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:09.829 03:32:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:19:09.829 03:32:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:09.829 03:32:47 -- host/auth.sh@68 -- # digest=sha384 00:19:09.829 03:32:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:09.829 03:32:47 -- host/auth.sh@68 -- # keyid=2 00:19:09.829 03:32:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:09.829 03:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.829 03:32:47 -- common/autotest_common.sh@10 -- # set +x 00:19:09.829 03:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.829 03:32:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:09.829 03:32:47 -- nvmf/common.sh@717 -- # local ip 00:19:09.829 03:32:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:09.829 03:32:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:09.829 03:32:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.829 03:32:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.829 03:32:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:09.829 03:32:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.829 03:32:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:09.829 03:32:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:09.829 03:32:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:09.829 03:32:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:09.829 03:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.829 03:32:47 -- common/autotest_common.sh@10 -- # set +x 00:19:10.087 nvme0n1 00:19:10.087 03:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.087 03:32:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.087 03:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.087 03:32:47 -- common/autotest_common.sh@10 -- # set +x 00:19:10.087 03:32:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.087 03:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.087 03:32:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.087 03:32:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.087 03:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.087 03:32:47 -- common/autotest_common.sh@10 -- # set +x 00:19:10.346 03:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.346 03:32:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.346 03:32:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:10.346 03:32:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.346 03:32:47 -- host/auth.sh@44 -- # digest=sha384 00:19:10.346 03:32:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:10.346 03:32:47 -- host/auth.sh@44 -- # keyid=3 00:19:10.346 03:32:47 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:10.346 03:32:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:10.346 03:32:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:10.346 03:32:47 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:10.346 03:32:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:19:10.346 03:32:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.346 03:32:47 -- host/auth.sh@68 -- # digest=sha384 00:19:10.346 03:32:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:10.346 03:32:47 -- host/auth.sh@68 -- # keyid=3 00:19:10.346 03:32:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:10.346 03:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.346 03:32:47 -- common/autotest_common.sh@10 -- # set +x 00:19:10.346 03:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.346 03:32:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.346 03:32:47 -- nvmf/common.sh@717 -- # local ip 00:19:10.346 03:32:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.346 03:32:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.346 03:32:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.346 03:32:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.346 03:32:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.346 03:32:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.346 03:32:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.346 03:32:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.346 03:32:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.346 03:32:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:10.346 03:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 
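Two easily misread details in the verification step: the bare nvme0n1 lines are not stray output, they are bdev_nvme_attach_controller printing the name of the bdev it created, so each successful handshake leaves exactly one nvme0n1 right after the attach RPC; and the [[ nvme0 == \n\v\m\e\0 ]] lines are just xtrace backslash-escaping the right-hand side of the name match. Unescaped, the check-and-teardown pair reads:

    # Controller must be up under the expected name before it is detached.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                 # xtrace renders the pattern as \n\v\m\e\0
    rpc_cmd bdev_nvme_detach_controller nvme0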
00:19:10.346 03:32:47 -- common/autotest_common.sh@10 -- # set +x 00:19:10.604 nvme0n1 00:19:10.604 03:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.604 03:32:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.604 03:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.604 03:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:10.604 03:32:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.604 03:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.604 03:32:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.604 03:32:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.604 03:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.604 03:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:10.604 03:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.604 03:32:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.604 03:32:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:10.604 03:32:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.604 03:32:48 -- host/auth.sh@44 -- # digest=sha384 00:19:10.604 03:32:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:10.604 03:32:48 -- host/auth.sh@44 -- # keyid=4 00:19:10.604 03:32:48 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:10.604 03:32:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:10.604 03:32:48 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:10.604 03:32:48 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:10.604 03:32:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:19:10.604 03:32:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.604 03:32:48 -- host/auth.sh@68 -- # digest=sha384 00:19:10.604 03:32:48 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:10.604 03:32:48 -- host/auth.sh@68 -- # keyid=4 00:19:10.604 03:32:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:10.604 03:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.604 03:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:10.604 03:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.604 03:32:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.604 03:32:48 -- nvmf/common.sh@717 -- # local ip 00:19:10.604 03:32:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.604 03:32:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.604 03:32:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.604 03:32:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.604 03:32:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.604 03:32:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.604 03:32:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.876 03:32:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.876 03:32:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.876 03:32:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:10.876 03:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.876 03:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:11.168 
nvme0n1 00:19:11.168 03:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.168 03:32:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.168 03:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.168 03:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:11.168 03:32:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:11.168 03:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.168 03:32:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.168 03:32:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.168 03:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.168 03:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:11.168 03:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.168 03:32:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.168 03:32:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.168 03:32:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:11.168 03:32:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.168 03:32:48 -- host/auth.sh@44 -- # digest=sha384 00:19:11.168 03:32:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:11.168 03:32:48 -- host/auth.sh@44 -- # keyid=0 00:19:11.168 03:32:48 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:11.168 03:32:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:11.168 03:32:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:11.168 03:32:48 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:11.168 03:32:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:19:11.168 03:32:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.168 03:32:48 -- host/auth.sh@68 -- # digest=sha384 00:19:11.168 03:32:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:11.168 03:32:48 -- host/auth.sh@68 -- # keyid=0 00:19:11.168 03:32:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.168 03:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.168 03:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:11.168 03:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.168 03:32:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:11.168 03:32:48 -- nvmf/common.sh@717 -- # local ip 00:19:11.168 03:32:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.168 03:32:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.168 03:32:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.168 03:32:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.168 03:32:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:11.168 03:32:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.168 03:32:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:11.168 03:32:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:11.168 03:32:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:11.168 03:32:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:11.168 03:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.168 03:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:12.114 nvme0n1 00:19:12.114 03:32:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
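By here the sweep has reached its ffdhe8192 leg, and the elapsed-time column explains the log's length: the five key cycles below run from roughly 00:19:11 to 00:19:16, about a second apiece, against sub-second cycles for ffdhe3072 above, consistent with the larger modular exponentiation in the DH exchange. The structure driving all of it shows in the script line numbers (@107 digests, @108 dhgroups, @109 key ids); a hedged reconstruction of the nesting, with the exact array contents an assumption, since the trace only exhibits the sha384 and sha512 legs, with dhgroups restarting at ffdhe2048 when the digest advances below:

    for digest in "${digests[@]}"; do            # host/auth.sh@107
        for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@108
            for keyid in "${!keys[@]}"; do       # host/auth.sh@109
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @110
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @111
            done
        done
    done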
00:19:12.114 03:32:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.114 03:32:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:12.114 03:32:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.114 03:32:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.114 03:32:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.114 03:32:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.114 03:32:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.114 03:32:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.114 03:32:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.114 03:32:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.114 03:32:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:12.114 03:32:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:12.114 03:32:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:12.114 03:32:49 -- host/auth.sh@44 -- # digest=sha384 00:19:12.114 03:32:49 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:12.114 03:32:49 -- host/auth.sh@44 -- # keyid=1 00:19:12.114 03:32:49 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:12.114 03:32:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:12.114 03:32:49 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:12.114 03:32:49 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:12.114 03:32:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:19:12.114 03:32:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:12.114 03:32:49 -- host/auth.sh@68 -- # digest=sha384 00:19:12.114 03:32:49 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:12.114 03:32:49 -- host/auth.sh@68 -- # keyid=1 00:19:12.114 03:32:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:12.114 03:32:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.114 03:32:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.114 03:32:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.114 03:32:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:12.114 03:32:49 -- nvmf/common.sh@717 -- # local ip 00:19:12.114 03:32:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:12.114 03:32:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:12.114 03:32:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.114 03:32:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.114 03:32:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:12.114 03:32:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.114 03:32:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:12.114 03:32:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:12.114 03:32:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:12.114 03:32:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:12.114 03:32:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.114 03:32:49 -- common/autotest_common.sh@10 -- # set +x 00:19:13.049 nvme0n1 00:19:13.049 03:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.049 03:32:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.049 03:32:50 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.049 03:32:50 -- common/autotest_common.sh@10 -- # set +x 00:19:13.049 03:32:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:13.049 03:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.049 03:32:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.049 03:32:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.049 03:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.049 03:32:50 -- common/autotest_common.sh@10 -- # set +x 00:19:13.049 03:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.049 03:32:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:13.049 03:32:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:13.049 03:32:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:13.049 03:32:50 -- host/auth.sh@44 -- # digest=sha384 00:19:13.049 03:32:50 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:13.049 03:32:50 -- host/auth.sh@44 -- # keyid=2 00:19:13.049 03:32:50 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:13.049 03:32:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:13.049 03:32:50 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:13.049 03:32:50 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:13.049 03:32:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:19:13.050 03:32:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:13.050 03:32:50 -- host/auth.sh@68 -- # digest=sha384 00:19:13.050 03:32:50 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:13.050 03:32:50 -- host/auth.sh@68 -- # keyid=2 00:19:13.050 03:32:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:13.050 03:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.050 03:32:50 -- common/autotest_common.sh@10 -- # set +x 00:19:13.050 03:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.050 03:32:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:13.050 03:32:50 -- nvmf/common.sh@717 -- # local ip 00:19:13.050 03:32:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:13.050 03:32:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:13.050 03:32:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.050 03:32:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.050 03:32:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:13.050 03:32:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.307 03:32:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:13.307 03:32:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:13.307 03:32:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:13.307 03:32:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:13.307 03:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.307 03:32:50 -- common/autotest_common.sh@10 -- # set +x 00:19:14.274 nvme0n1 00:19:14.274 03:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.274 03:32:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.274 03:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.274 03:32:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:14.274 03:32:51 -- common/autotest_common.sh@10 
-- # set +x 00:19:14.274 03:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.274 03:32:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.275 03:32:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.275 03:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.275 03:32:51 -- common/autotest_common.sh@10 -- # set +x 00:19:14.275 03:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.275 03:32:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:14.275 03:32:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:14.275 03:32:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:14.275 03:32:51 -- host/auth.sh@44 -- # digest=sha384 00:19:14.275 03:32:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:14.275 03:32:51 -- host/auth.sh@44 -- # keyid=3 00:19:14.275 03:32:51 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:14.275 03:32:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:14.275 03:32:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:14.275 03:32:51 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:14.275 03:32:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:19:14.275 03:32:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:14.275 03:32:51 -- host/auth.sh@68 -- # digest=sha384 00:19:14.275 03:32:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:14.275 03:32:51 -- host/auth.sh@68 -- # keyid=3 00:19:14.275 03:32:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:14.275 03:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.275 03:32:51 -- common/autotest_common.sh@10 -- # set +x 00:19:14.275 03:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.275 03:32:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:14.275 03:32:51 -- nvmf/common.sh@717 -- # local ip 00:19:14.275 03:32:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:14.275 03:32:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:14.275 03:32:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.275 03:32:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.275 03:32:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:14.275 03:32:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.275 03:32:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:14.275 03:32:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:14.275 03:32:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:14.275 03:32:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:14.275 03:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.275 03:32:51 -- common/autotest_common.sh@10 -- # set +x 00:19:15.209 nvme0n1 00:19:15.209 03:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.209 03:32:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.209 03:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.209 03:32:52 -- common/autotest_common.sh@10 -- # set +x 00:19:15.209 03:32:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:15.209 03:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.209 03:32:52 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.209 03:32:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.209 03:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.209 03:32:52 -- common/autotest_common.sh@10 -- # set +x 00:19:15.209 03:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.209 03:32:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:15.209 03:32:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:15.209 03:32:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:15.209 03:32:52 -- host/auth.sh@44 -- # digest=sha384 00:19:15.209 03:32:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:15.209 03:32:52 -- host/auth.sh@44 -- # keyid=4 00:19:15.209 03:32:52 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:15.209 03:32:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:15.209 03:32:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:15.209 03:32:52 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:15.209 03:32:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:19:15.209 03:32:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:15.209 03:32:52 -- host/auth.sh@68 -- # digest=sha384 00:19:15.209 03:32:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:15.209 03:32:52 -- host/auth.sh@68 -- # keyid=4 00:19:15.209 03:32:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.209 03:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.209 03:32:52 -- common/autotest_common.sh@10 -- # set +x 00:19:15.209 03:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.209 03:32:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:15.209 03:32:52 -- nvmf/common.sh@717 -- # local ip 00:19:15.209 03:32:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:15.209 03:32:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:15.209 03:32:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.209 03:32:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.209 03:32:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:15.209 03:32:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.209 03:32:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:15.209 03:32:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:15.209 03:32:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:15.209 03:32:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:15.210 03:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.210 03:32:52 -- common/autotest_common.sh@10 -- # set +x 00:19:16.143 nvme0n1 00:19:16.143 03:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.143 03:32:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.143 03:32:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:16.143 03:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.143 03:32:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.143 03:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.143 03:32:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.143 03:32:53 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.143 03:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.143 03:32:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.143 03:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.143 03:32:53 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:16.143 03:32:53 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.143 03:32:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:16.143 03:32:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:16.143 03:32:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:16.143 03:32:53 -- host/auth.sh@44 -- # digest=sha512 00:19:16.143 03:32:53 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:16.143 03:32:53 -- host/auth.sh@44 -- # keyid=0 00:19:16.143 03:32:53 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:16.144 03:32:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:16.144 03:32:53 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:16.144 03:32:53 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:16.144 03:32:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:19:16.144 03:32:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:16.144 03:32:53 -- host/auth.sh@68 -- # digest=sha512 00:19:16.144 03:32:53 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:16.144 03:32:53 -- host/auth.sh@68 -- # keyid=0 00:19:16.144 03:32:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.144 03:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.144 03:32:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.144 03:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.144 03:32:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:16.144 03:32:53 -- nvmf/common.sh@717 -- # local ip 00:19:16.144 03:32:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:16.144 03:32:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:16.144 03:32:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.144 03:32:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.144 03:32:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:16.144 03:32:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.144 03:32:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:16.144 03:32:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:16.144 03:32:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:16.144 03:32:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:16.144 03:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.144 03:32:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.402 nvme0n1 00:19:16.402 03:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.402 03:32:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.402 03:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.402 03:32:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.402 03:32:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:16.402 03:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.402 03:32:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.402 03:32:53 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.402 03:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.402 03:32:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.402 03:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.402 03:32:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:16.402 03:32:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:16.402 03:32:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:16.402 03:32:53 -- host/auth.sh@44 -- # digest=sha512 00:19:16.402 03:32:53 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:16.402 03:32:53 -- host/auth.sh@44 -- # keyid=1 00:19:16.402 03:32:53 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:16.402 03:32:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:16.402 03:32:53 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:16.402 03:32:53 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:16.402 03:32:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:19:16.402 03:32:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:16.402 03:32:53 -- host/auth.sh@68 -- # digest=sha512 00:19:16.402 03:32:53 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:16.402 03:32:53 -- host/auth.sh@68 -- # keyid=1 00:19:16.402 03:32:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.402 03:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.402 03:32:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.402 03:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.402 03:32:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:16.402 03:32:53 -- nvmf/common.sh@717 -- # local ip 00:19:16.402 03:32:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:16.402 03:32:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:16.402 03:32:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.402 03:32:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.402 03:32:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:16.402 03:32:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.402 03:32:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:16.402 03:32:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:16.402 03:32:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:16.402 03:32:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:16.402 03:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.402 03:32:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.402 nvme0n1 00:19:16.402 03:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.402 03:32:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.402 03:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.402 03:32:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.402 03:32:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:16.402 03:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.402 03:32:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.402 03:32:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.402 03:32:53 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:19:16.402 03:32:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.402 03:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.402 03:32:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:16.402 03:32:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:16.402 03:32:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:16.402 03:32:53 -- host/auth.sh@44 -- # digest=sha512 00:19:16.402 03:32:53 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:16.402 03:32:53 -- host/auth.sh@44 -- # keyid=2 00:19:16.402 03:32:53 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:16.402 03:32:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:16.402 03:32:53 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:16.402 03:32:53 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:16.402 03:32:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:19:16.402 03:32:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:16.402 03:32:53 -- host/auth.sh@68 -- # digest=sha512 00:19:16.402 03:32:53 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:16.402 03:32:53 -- host/auth.sh@68 -- # keyid=2 00:19:16.402 03:32:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.402 03:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.402 03:32:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.660 03:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.660 03:32:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:16.660 03:32:53 -- nvmf/common.sh@717 -- # local ip 00:19:16.660 03:32:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:16.660 03:32:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:16.660 03:32:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.660 03:32:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.660 03:32:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:16.660 03:32:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.660 03:32:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:16.660 03:32:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:16.660 03:32:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:16.660 03:32:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:16.660 03:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.660 03:32:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.660 nvme0n1 00:19:16.660 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.660 03:32:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.660 03:32:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:16.660 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.660 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:16.660 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.660 03:32:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.660 03:32:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.660 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.660 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:16.660 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.660 
03:32:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:16.660 03:32:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:16.660 03:32:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:16.660 03:32:54 -- host/auth.sh@44 -- # digest=sha512 00:19:16.660 03:32:54 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:16.660 03:32:54 -- host/auth.sh@44 -- # keyid=3 00:19:16.660 03:32:54 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:16.660 03:32:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:16.660 03:32:54 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:16.660 03:32:54 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:16.660 03:32:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:19:16.660 03:32:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:16.660 03:32:54 -- host/auth.sh@68 -- # digest=sha512 00:19:16.660 03:32:54 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:16.660 03:32:54 -- host/auth.sh@68 -- # keyid=3 00:19:16.660 03:32:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.660 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.660 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:16.660 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.660 03:32:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:16.660 03:32:54 -- nvmf/common.sh@717 -- # local ip 00:19:16.661 03:32:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:16.661 03:32:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:16.661 03:32:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.661 03:32:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.661 03:32:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:16.661 03:32:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.661 03:32:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:16.661 03:32:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:16.661 03:32:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:16.661 03:32:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:16.661 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.661 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:16.918 nvme0n1 00:19:16.918 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.918 03:32:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.918 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.918 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:16.918 03:32:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:16.918 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.918 03:32:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.918 03:32:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.918 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.918 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:16.918 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.918 03:32:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:16.918 03:32:54 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:19:16.919 03:32:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:16.919 03:32:54 -- host/auth.sh@44 -- # digest=sha512 00:19:16.919 03:32:54 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:16.919 03:32:54 -- host/auth.sh@44 -- # keyid=4 00:19:16.919 03:32:54 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:16.919 03:32:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:16.919 03:32:54 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:16.919 03:32:54 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:16.919 03:32:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:19:16.919 03:32:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:16.919 03:32:54 -- host/auth.sh@68 -- # digest=sha512 00:19:16.919 03:32:54 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:16.919 03:32:54 -- host/auth.sh@68 -- # keyid=4 00:19:16.919 03:32:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.919 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.919 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:16.919 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.919 03:32:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:16.919 03:32:54 -- nvmf/common.sh@717 -- # local ip 00:19:16.919 03:32:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:16.919 03:32:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:16.919 03:32:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.919 03:32:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.919 03:32:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:16.919 03:32:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.919 03:32:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:16.919 03:32:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:16.919 03:32:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:16.919 03:32:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:16.919 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.919 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:16.919 nvme0n1 00:19:16.919 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.919 03:32:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.919 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.919 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:16.919 03:32:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:16.919 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.177 03:32:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.177 03:32:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.177 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.177 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:17.177 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.177 03:32:54 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.177 03:32:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:17.177 03:32:54 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe3072 0 00:19:17.177 03:32:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:17.177 03:32:54 -- host/auth.sh@44 -- # digest=sha512 00:19:17.177 03:32:54 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:17.177 03:32:54 -- host/auth.sh@44 -- # keyid=0 00:19:17.177 03:32:54 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:17.177 03:32:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:17.177 03:32:54 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:17.177 03:32:54 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:17.177 03:32:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:19:17.177 03:32:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:17.177 03:32:54 -- host/auth.sh@68 -- # digest=sha512 00:19:17.177 03:32:54 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:17.177 03:32:54 -- host/auth.sh@68 -- # keyid=0 00:19:17.177 03:32:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.177 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.177 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:17.177 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.177 03:32:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:17.178 03:32:54 -- nvmf/common.sh@717 -- # local ip 00:19:17.178 03:32:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.178 03:32:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.178 03:32:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.178 03:32:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.178 03:32:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:17.178 03:32:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.178 03:32:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:17.178 03:32:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:17.178 03:32:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:17.178 03:32:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:17.178 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.178 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:17.178 nvme0n1 00:19:17.178 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.178 03:32:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.178 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.178 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:17.178 03:32:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:17.178 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.178 03:32:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.178 03:32:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.178 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.178 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:17.436 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.436 03:32:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:17.436 03:32:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:17.436 03:32:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:17.436 03:32:54 -- host/auth.sh@44 -- # 
digest=sha512 00:19:17.436 03:32:54 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:17.436 03:32:54 -- host/auth.sh@44 -- # keyid=1 00:19:17.436 03:32:54 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:17.436 03:32:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:17.436 03:32:54 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:17.436 03:32:54 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:17.436 03:32:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:19:17.436 03:32:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:17.436 03:32:54 -- host/auth.sh@68 -- # digest=sha512 00:19:17.436 03:32:54 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:17.436 03:32:54 -- host/auth.sh@68 -- # keyid=1 00:19:17.436 03:32:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.436 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.436 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:17.436 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.436 03:32:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:17.436 03:32:54 -- nvmf/common.sh@717 -- # local ip 00:19:17.436 03:32:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.436 03:32:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.436 03:32:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.436 03:32:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.436 03:32:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:17.436 03:32:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.436 03:32:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:17.436 03:32:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:17.436 03:32:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:17.436 03:32:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:17.436 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.436 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:17.436 nvme0n1 00:19:17.436 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.436 03:32:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.436 03:32:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:17.436 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.436 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:17.436 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.436 03:32:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.436 03:32:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.436 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.436 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:17.436 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.436 03:32:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:17.436 03:32:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:17.436 03:32:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:17.436 03:32:54 -- host/auth.sh@44 -- # digest=sha512 00:19:17.436 03:32:54 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:17.436 03:32:54 -- host/auth.sh@44 
-- # keyid=2 00:19:17.436 03:32:54 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:17.436 03:32:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:17.436 03:32:54 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:17.436 03:32:54 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:17.436 03:32:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:19:17.436 03:32:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:17.436 03:32:54 -- host/auth.sh@68 -- # digest=sha512 00:19:17.436 03:32:54 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:17.436 03:32:54 -- host/auth.sh@68 -- # keyid=2 00:19:17.436 03:32:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.436 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.436 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:17.436 03:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.436 03:32:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:17.436 03:32:54 -- nvmf/common.sh@717 -- # local ip 00:19:17.436 03:32:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.436 03:32:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.436 03:32:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.436 03:32:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.436 03:32:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:17.436 03:32:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.436 03:32:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:17.436 03:32:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:17.436 03:32:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:17.436 03:32:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:17.436 03:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.436 03:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:17.694 nvme0n1 00:19:17.694 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.694 03:32:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.694 03:32:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:17.694 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.694 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:17.694 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.694 03:32:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.694 03:32:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.694 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.694 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:17.694 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.694 03:32:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:17.694 03:32:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:17.694 03:32:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:17.694 03:32:55 -- host/auth.sh@44 -- # digest=sha512 00:19:17.694 03:32:55 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:17.694 03:32:55 -- host/auth.sh@44 -- # keyid=3 00:19:17.694 03:32:55 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:17.694 03:32:55 
-- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:17.694 03:32:55 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:17.694 03:32:55 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:17.694 03:32:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:19:17.694 03:32:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:17.694 03:32:55 -- host/auth.sh@68 -- # digest=sha512 00:19:17.694 03:32:55 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:17.694 03:32:55 -- host/auth.sh@68 -- # keyid=3 00:19:17.694 03:32:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.694 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.694 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:17.694 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.694 03:32:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:17.694 03:32:55 -- nvmf/common.sh@717 -- # local ip 00:19:17.694 03:32:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.694 03:32:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.694 03:32:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.694 03:32:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.694 03:32:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:17.694 03:32:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.694 03:32:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:17.694 03:32:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:17.694 03:32:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:17.694 03:32:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:17.694 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.694 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:17.952 nvme0n1 00:19:17.953 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.953 03:32:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.953 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.953 03:32:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:17.953 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:17.953 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.953 03:32:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.953 03:32:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.953 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.953 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:17.953 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.953 03:32:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:17.953 03:32:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:17.953 03:32:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:17.953 03:32:55 -- host/auth.sh@44 -- # digest=sha512 00:19:17.953 03:32:55 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:17.953 03:32:55 -- host/auth.sh@44 -- # keyid=4 00:19:17.953 03:32:55 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:17.953 03:32:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:17.953 03:32:55 -- host/auth.sh@48 -- # echo 
ffdhe3072 00:19:17.953 03:32:55 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:17.953 03:32:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:19:17.953 03:32:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:17.953 03:32:55 -- host/auth.sh@68 -- # digest=sha512 00:19:17.953 03:32:55 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:17.953 03:32:55 -- host/auth.sh@68 -- # keyid=4 00:19:17.953 03:32:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.953 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.953 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:17.953 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.953 03:32:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:17.953 03:32:55 -- nvmf/common.sh@717 -- # local ip 00:19:17.953 03:32:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.953 03:32:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.953 03:32:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.953 03:32:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.953 03:32:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:17.953 03:32:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.953 03:32:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:17.953 03:32:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:17.953 03:32:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:17.953 03:32:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:17.953 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.953 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:18.211 nvme0n1 00:19:18.211 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.211 03:32:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.211 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.211 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:18.211 03:32:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.211 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.211 03:32:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.211 03:32:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.211 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.211 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:18.211 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.211 03:32:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.211 03:32:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.211 03:32:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:18.211 03:32:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.211 03:32:55 -- host/auth.sh@44 -- # digest=sha512 00:19:18.211 03:32:55 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:18.211 03:32:55 -- host/auth.sh@44 -- # keyid=0 00:19:18.211 03:32:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:18.211 03:32:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.211 03:32:55 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:18.211 03:32:55 -- 
host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:18.211 03:32:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:19:18.211 03:32:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.211 03:32:55 -- host/auth.sh@68 -- # digest=sha512 00:19:18.211 03:32:55 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:18.211 03:32:55 -- host/auth.sh@68 -- # keyid=0 00:19:18.212 03:32:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:18.212 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.212 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:18.212 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.212 03:32:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.212 03:32:55 -- nvmf/common.sh@717 -- # local ip 00:19:18.212 03:32:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.212 03:32:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.212 03:32:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.212 03:32:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.212 03:32:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.212 03:32:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.212 03:32:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.212 03:32:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.212 03:32:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.212 03:32:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:18.212 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.212 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:18.470 nvme0n1 00:19:18.470 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.470 03:32:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.470 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.470 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:18.470 03:32:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.470 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.470 03:32:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.470 03:32:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.470 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.470 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:18.470 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.470 03:32:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.470 03:32:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:18.470 03:32:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.470 03:32:55 -- host/auth.sh@44 -- # digest=sha512 00:19:18.470 03:32:55 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:18.470 03:32:55 -- host/auth.sh@44 -- # keyid=1 00:19:18.470 03:32:55 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:18.470 03:32:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.470 03:32:55 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:18.470 03:32:55 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:18.470 03:32:55 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:19:18.471 03:32:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.471 03:32:55 -- host/auth.sh@68 -- # digest=sha512 00:19:18.471 03:32:55 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:18.471 03:32:55 -- host/auth.sh@68 -- # keyid=1 00:19:18.471 03:32:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:18.471 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.471 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:18.471 03:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.471 03:32:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.471 03:32:55 -- nvmf/common.sh@717 -- # local ip 00:19:18.471 03:32:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.471 03:32:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.471 03:32:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.471 03:32:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.471 03:32:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.471 03:32:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.471 03:32:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.471 03:32:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.471 03:32:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.471 03:32:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:18.471 03:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.471 03:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:18.729 nvme0n1 00:19:18.729 03:32:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.729 03:32:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.729 03:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.729 03:32:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.729 03:32:56 -- common/autotest_common.sh@10 -- # set +x 00:19:18.729 03:32:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.729 03:32:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.729 03:32:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.729 03:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.729 03:32:56 -- common/autotest_common.sh@10 -- # set +x 00:19:18.729 03:32:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.729 03:32:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.729 03:32:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:18.729 03:32:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.729 03:32:56 -- host/auth.sh@44 -- # digest=sha512 00:19:18.729 03:32:56 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:18.730 03:32:56 -- host/auth.sh@44 -- # keyid=2 00:19:18.730 03:32:56 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:18.730 03:32:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.730 03:32:56 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:18.730 03:32:56 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:18.730 03:32:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:19:18.730 03:32:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.730 03:32:56 -- 
host/auth.sh@68 -- # digest=sha512 00:19:18.730 03:32:56 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:18.730 03:32:56 -- host/auth.sh@68 -- # keyid=2 00:19:18.730 03:32:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:18.730 03:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.730 03:32:56 -- common/autotest_common.sh@10 -- # set +x 00:19:18.730 03:32:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.730 03:32:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.730 03:32:56 -- nvmf/common.sh@717 -- # local ip 00:19:18.730 03:32:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.730 03:32:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.730 03:32:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.730 03:32:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.730 03:32:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.730 03:32:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.730 03:32:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.730 03:32:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.730 03:32:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.730 03:32:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:18.730 03:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.730 03:32:56 -- common/autotest_common.sh@10 -- # set +x 00:19:18.988 nvme0n1 00:19:18.988 03:32:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.988 03:32:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.988 03:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.988 03:32:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.988 03:32:56 -- common/autotest_common.sh@10 -- # set +x 00:19:18.988 03:32:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.988 03:32:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.988 03:32:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.988 03:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.988 03:32:56 -- common/autotest_common.sh@10 -- # set +x 00:19:19.247 03:32:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.247 03:32:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.247 03:32:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:19.247 03:32:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.247 03:32:56 -- host/auth.sh@44 -- # digest=sha512 00:19:19.247 03:32:56 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:19.247 03:32:56 -- host/auth.sh@44 -- # keyid=3 00:19:19.247 03:32:56 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:19.247 03:32:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:19.247 03:32:56 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:19.247 03:32:56 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:19.247 03:32:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:19:19.247 03:32:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.247 03:32:56 -- host/auth.sh@68 -- # digest=sha512 00:19:19.247 03:32:56 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:19.247 03:32:56 
-- host/auth.sh@68 -- # keyid=3 00:19:19.247 03:32:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.247 03:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.247 03:32:56 -- common/autotest_common.sh@10 -- # set +x 00:19:19.247 03:32:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.247 03:32:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.247 03:32:56 -- nvmf/common.sh@717 -- # local ip 00:19:19.247 03:32:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.247 03:32:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.247 03:32:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.247 03:32:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.247 03:32:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:19.247 03:32:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.247 03:32:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:19.247 03:32:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:19.247 03:32:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:19.247 03:32:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:19.247 03:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.247 03:32:56 -- common/autotest_common.sh@10 -- # set +x 00:19:19.505 nvme0n1 00:19:19.505 03:32:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.505 03:32:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.505 03:32:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:19.505 03:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.505 03:32:56 -- common/autotest_common.sh@10 -- # set +x 00:19:19.505 03:32:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.505 03:32:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.505 03:32:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.505 03:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.505 03:32:56 -- common/autotest_common.sh@10 -- # set +x 00:19:19.505 03:32:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.505 03:32:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.505 03:32:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:19.505 03:32:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.505 03:32:56 -- host/auth.sh@44 -- # digest=sha512 00:19:19.505 03:32:56 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:19.505 03:32:56 -- host/auth.sh@44 -- # keyid=4 00:19:19.505 03:32:56 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:19.505 03:32:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:19.505 03:32:56 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:19.505 03:32:56 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:19.505 03:32:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:19:19.505 03:32:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.505 03:32:56 -- host/auth.sh@68 -- # digest=sha512 00:19:19.505 03:32:56 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:19.505 03:32:56 -- host/auth.sh@68 -- # keyid=4 00:19:19.505 03:32:56 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.505 03:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.505 03:32:56 -- common/autotest_common.sh@10 -- # set +x 00:19:19.505 03:32:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.505 03:32:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.505 03:32:56 -- nvmf/common.sh@717 -- # local ip 00:19:19.505 03:32:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.505 03:32:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.505 03:32:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.505 03:32:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.505 03:32:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:19.505 03:32:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.505 03:32:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:19.505 03:32:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:19.505 03:32:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:19.505 03:32:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:19.505 03:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.505 03:32:56 -- common/autotest_common.sh@10 -- # set +x 00:19:19.764 nvme0n1 00:19:19.764 03:32:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.764 03:32:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.764 03:32:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:19.764 03:32:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.764 03:32:57 -- common/autotest_common.sh@10 -- # set +x 00:19:19.764 03:32:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.764 03:32:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.764 03:32:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.764 03:32:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.764 03:32:57 -- common/autotest_common.sh@10 -- # set +x 00:19:19.764 03:32:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.764 03:32:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.764 03:32:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.764 03:32:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:19.764 03:32:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.764 03:32:57 -- host/auth.sh@44 -- # digest=sha512 00:19:19.765 03:32:57 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:19.765 03:32:57 -- host/auth.sh@44 -- # keyid=0 00:19:19.765 03:32:57 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:19.765 03:32:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:19.765 03:32:57 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:19.765 03:32:57 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:19.765 03:32:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:19:19.765 03:32:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.765 03:32:57 -- host/auth.sh@68 -- # digest=sha512 00:19:19.765 03:32:57 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:19.765 03:32:57 -- host/auth.sh@68 -- # keyid=0 00:19:19.765 03:32:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
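At this point the run has cycled hmac(sha512) through ffdhe2048, ffdhe3072 and ffdhe4096 and is starting the ffdhe6144 pass; every pass provisions the same five DHHC-1 secrets (keyids 0-4) and attempts an authenticated connect with each. Per the NVMe DH-HMAC-CHAP secret representation, the two-digit field after DHHC-1 says how the base64 payload was derived (00 = plain secret; 01/02/03 = secret transformed with SHA-256/384/512, which is why keyids 2-4 above carry 01, 02 and 03), and the decoded payload ends in a CRC-32 over the secret. A quick way to inspect the keyid-0 secret from this run (an illustrative decode, not part of the test itself):

# Format: DHHC-1:<hash-id>:<base64(secret || crc32)>:
key='DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV:'
cut -d: -f2 <<< "$key"                            # -> 00, i.e. not pre-hashed
cut -d: -f3 <<< "$key" | base64 -d | head -c -4   # secret bytes, trailing CRC-32 stripped (GNU head)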
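Each nvmet_auth_set_key call (host/auth.sh@110) pushes the digest, DH group and secret to the kernel nvmet target before the connect attempt. The function body is outside this excerpt, but the three echo traces at @47-49 line up with writes to the per-host DH-CHAP attributes the kernel target exposes through configfs; a plausible reconstruction, with the configfs path assumed from the standard nvmet layout:

# Hypothetical reconstruction of nvmet_auth_set_key; assumes the kernel nvmet
# configfs attributes dhchap_hash/dhchap_dhgroup/dhchap_key under the host
# entry for the initiator NQN used by the attach calls, and the keys[] array
# iterated by the loop at host/auth.sh@109.
nvmet_auth_set_key() {
	local digest dhgroup keyid key
	digest=$1 dhgroup=$2 keyid=$3
	key=${keys[keyid]}
	local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
	echo "hmac(${digest})" > "$host_dir/dhchap_hash"   # hmac(sha512) in this pass
	echo "$dhgroup" > "$host_dir/dhchap_dhgroup"       # ffdhe2048 ... ffdhe8192
	echo "$key" > "$host_dir/dhchap_key"               # the DHHC-1 secret
}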
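On the initiator side, connect_authenticate (host/auth.sh@111, expanded at @66-74 above) is four RPCs per iteration: pin the host to a single digest and DH group, attach with the matching key, check that the controller actually appeared, and detach so the next keyid can run. Since rpc_cmd is the suite's wrapper around scripts/rpc.py, the same cycle can be reproduced standalone; the key name (key0 below) refers to a DH-CHAP key registered with the application earlier in the test, outside this excerpt:

# One connect/verify/disconnect cycle, matching the rpc_cmd calls in the trace
# for the ffdhe6144 pass that starts here.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
	-a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key0   # succeeds only if DH-HMAC-CHAP completes; prints nvme0n1
# Verify the authenticated controller exists, then drop it for the next keyid.
[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0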
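The nvmf/common.sh@717-731 block that runs before every attach is get_main_ns_ip picking the address to dial: the associative array maps each transport to the name of the environment variable that carries the IP, and bash indirection resolves it, which is why the trace shows ip=NVMF_INITIATOR_IP and then [[ -z 10.0.0.1 ]]. A sketch reconstructed from those trace lines (the transport variable name is assumed from the suite's conventions):

# Sketch of get_main_ns_ip as traced at nvmf/common.sh@717-731.
get_main_ns_ip() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP
	# Both -z tests trace from @723: the transport must be known and mapped.
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates["$TEST_TRANSPORT"]} ]] && return 1
	ip=${ip_candidates["$TEST_TRANSPORT"]}   # ip=NVMF_INITIATOR_IP for tcp
	[[ -z ${!ip} ]] && return 1              # ${!ip} dereferences to 10.0.0.1 here
	echo "${!ip}"
}

Storing variable names rather than addresses keeps the helper transport-agnostic; the indirection step is what turns NVMF_INITIATOR_IP into 10.0.0.1 in this tcp run.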
00:19:19.765 03:32:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.765 03:32:57 -- common/autotest_common.sh@10 -- # set +x 00:19:19.765 03:32:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.765 03:32:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.765 03:32:57 -- nvmf/common.sh@717 -- # local ip 00:19:19.765 03:32:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.765 03:32:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.765 03:32:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.765 03:32:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.765 03:32:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:19.765 03:32:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.765 03:32:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:19.765 03:32:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:19.765 03:32:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:19.765 03:32:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:19.765 03:32:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.765 03:32:57 -- common/autotest_common.sh@10 -- # set +x 00:19:20.331 nvme0n1 00:19:20.331 03:32:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.331 03:32:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.331 03:32:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.331 03:32:57 -- common/autotest_common.sh@10 -- # set +x 00:19:20.331 03:32:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:20.331 03:32:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.331 03:32:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.331 03:32:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.331 03:32:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.331 03:32:57 -- common/autotest_common.sh@10 -- # set +x 00:19:20.331 03:32:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.331 03:32:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:20.331 03:32:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:20.331 03:32:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:20.331 03:32:57 -- host/auth.sh@44 -- # digest=sha512 00:19:20.331 03:32:57 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.331 03:32:57 -- host/auth.sh@44 -- # keyid=1 00:19:20.331 03:32:57 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:20.331 03:32:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:20.331 03:32:57 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:20.331 03:32:57 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:20.331 03:32:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:19:20.331 03:32:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:20.331 03:32:57 -- host/auth.sh@68 -- # digest=sha512 00:19:20.331 03:32:57 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:20.331 03:32:57 -- host/auth.sh@68 -- # keyid=1 00:19:20.331 03:32:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.331 03:32:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.331 03:32:57 -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.331 03:32:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.331 03:32:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:20.331 03:32:57 -- nvmf/common.sh@717 -- # local ip 00:19:20.331 03:32:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:20.331 03:32:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:20.331 03:32:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.331 03:32:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.331 03:32:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:20.331 03:32:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.331 03:32:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:20.331 03:32:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:20.331 03:32:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:20.331 03:32:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:20.331 03:32:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.331 03:32:57 -- common/autotest_common.sh@10 -- # set +x 00:19:20.897 nvme0n1 00:19:20.897 03:32:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.897 03:32:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.897 03:32:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.897 03:32:58 -- common/autotest_common.sh@10 -- # set +x 00:19:20.897 03:32:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:20.897 03:32:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.897 03:32:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.897 03:32:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.897 03:32:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.897 03:32:58 -- common/autotest_common.sh@10 -- # set +x 00:19:20.897 03:32:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.897 03:32:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:20.897 03:32:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:20.897 03:32:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:20.897 03:32:58 -- host/auth.sh@44 -- # digest=sha512 00:19:20.897 03:32:58 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.897 03:32:58 -- host/auth.sh@44 -- # keyid=2 00:19:20.897 03:32:58 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:20.897 03:32:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:20.897 03:32:58 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:20.897 03:32:58 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:20.897 03:32:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:19:20.897 03:32:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:20.897 03:32:58 -- host/auth.sh@68 -- # digest=sha512 00:19:20.897 03:32:58 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:20.897 03:32:58 -- host/auth.sh@68 -- # keyid=2 00:19:20.897 03:32:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.897 03:32:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.898 03:32:58 -- common/autotest_common.sh@10 -- # set +x 00:19:20.898 03:32:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.898 03:32:58 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:19:20.898 03:32:58 -- nvmf/common.sh@717 -- # local ip 00:19:20.898 03:32:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:20.898 03:32:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:20.898 03:32:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.898 03:32:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.898 03:32:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:20.898 03:32:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.898 03:32:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:20.898 03:32:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:20.898 03:32:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:20.898 03:32:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:20.898 03:32:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.898 03:32:58 -- common/autotest_common.sh@10 -- # set +x 00:19:21.464 nvme0n1 00:19:21.464 03:32:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.464 03:32:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.464 03:32:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:21.464 03:32:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.464 03:32:58 -- common/autotest_common.sh@10 -- # set +x 00:19:21.464 03:32:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.464 03:32:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.464 03:32:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.464 03:32:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.464 03:32:58 -- common/autotest_common.sh@10 -- # set +x 00:19:21.464 03:32:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.464 03:32:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:21.464 03:32:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:21.464 03:32:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:21.464 03:32:58 -- host/auth.sh@44 -- # digest=sha512 00:19:21.464 03:32:58 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:21.464 03:32:58 -- host/auth.sh@44 -- # keyid=3 00:19:21.464 03:32:58 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:21.464 03:32:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:21.464 03:32:58 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:21.464 03:32:58 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:21.464 03:32:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:19:21.464 03:32:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:21.464 03:32:58 -- host/auth.sh@68 -- # digest=sha512 00:19:21.464 03:32:58 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:21.464 03:32:58 -- host/auth.sh@68 -- # keyid=3 00:19:21.464 03:32:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:21.464 03:32:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.464 03:32:58 -- common/autotest_common.sh@10 -- # set +x 00:19:21.464 03:32:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.464 03:32:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:21.464 03:32:58 -- nvmf/common.sh@717 -- # local ip 00:19:21.464 03:32:58 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:19:21.464 03:32:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:21.464 03:32:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.464 03:32:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.464 03:32:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:21.464 03:32:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.464 03:32:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:21.464 03:32:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:21.464 03:32:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:21.464 03:32:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:21.464 03:32:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.464 03:32:58 -- common/autotest_common.sh@10 -- # set +x 00:19:22.031 nvme0n1 00:19:22.031 03:32:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.031 03:32:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.031 03:32:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.031 03:32:59 -- common/autotest_common.sh@10 -- # set +x 00:19:22.031 03:32:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:22.031 03:32:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.031 03:32:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.031 03:32:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.031 03:32:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.031 03:32:59 -- common/autotest_common.sh@10 -- # set +x 00:19:22.031 03:32:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.031 03:32:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.031 03:32:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:22.031 03:32:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.031 03:32:59 -- host/auth.sh@44 -- # digest=sha512 00:19:22.031 03:32:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:22.031 03:32:59 -- host/auth.sh@44 -- # keyid=4 00:19:22.031 03:32:59 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:22.031 03:32:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:22.031 03:32:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:22.031 03:32:59 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:22.031 03:32:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:19:22.031 03:32:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:22.031 03:32:59 -- host/auth.sh@68 -- # digest=sha512 00:19:22.031 03:32:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:22.032 03:32:59 -- host/auth.sh@68 -- # keyid=4 00:19:22.032 03:32:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.032 03:32:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.032 03:32:59 -- common/autotest_common.sh@10 -- # set +x 00:19:22.032 03:32:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.032 03:32:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:22.032 03:32:59 -- nvmf/common.sh@717 -- # local ip 00:19:22.032 03:32:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.032 03:32:59 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:19:22.032 03:32:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.032 03:32:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.032 03:32:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:22.032 03:32:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.032 03:32:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:22.032 03:32:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:22.032 03:32:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:22.032 03:32:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:22.032 03:32:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.032 03:32:59 -- common/autotest_common.sh@10 -- # set +x 00:19:22.609 nvme0n1 00:19:22.609 03:33:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.609 03:33:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.609 03:33:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.610 03:33:00 -- common/autotest_common.sh@10 -- # set +x 00:19:22.610 03:33:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:22.610 03:33:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.610 03:33:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.610 03:33:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.610 03:33:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.610 03:33:00 -- common/autotest_common.sh@10 -- # set +x 00:19:22.610 03:33:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.610 03:33:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.610 03:33:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.610 03:33:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:22.610 03:33:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.610 03:33:00 -- host/auth.sh@44 -- # digest=sha512 00:19:22.610 03:33:00 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:22.610 03:33:00 -- host/auth.sh@44 -- # keyid=0 00:19:22.610 03:33:00 -- host/auth.sh@45 -- # key=DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:22.610 03:33:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:22.610 03:33:00 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:22.610 03:33:00 -- host/auth.sh@49 -- # echo DHHC-1:00:MzYxYzA5OWVhZWEwZTAxZjVjMzExYzZiNzEyOGE4MjJrf1JV: 00:19:22.610 03:33:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:19:22.610 03:33:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:22.610 03:33:00 -- host/auth.sh@68 -- # digest=sha512 00:19:22.610 03:33:00 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:22.610 03:33:00 -- host/auth.sh@68 -- # keyid=0 00:19:22.610 03:33:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:22.610 03:33:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.610 03:33:00 -- common/autotest_common.sh@10 -- # set +x 00:19:22.610 03:33:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.610 03:33:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:22.610 03:33:00 -- nvmf/common.sh@717 -- # local ip 00:19:22.610 03:33:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.610 03:33:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:22.610 03:33:00 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.610 03:33:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.610 03:33:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:22.610 03:33:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.610 03:33:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:22.610 03:33:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:22.610 03:33:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:22.610 03:33:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:22.610 03:33:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.610 03:33:00 -- common/autotest_common.sh@10 -- # set +x 00:19:23.547 nvme0n1 00:19:23.547 03:33:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.547 03:33:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.547 03:33:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.547 03:33:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:23.547 03:33:01 -- common/autotest_common.sh@10 -- # set +x 00:19:23.547 03:33:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.547 03:33:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.547 03:33:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.547 03:33:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.547 03:33:01 -- common/autotest_common.sh@10 -- # set +x 00:19:23.547 03:33:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.547 03:33:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.547 03:33:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:23.547 03:33:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.547 03:33:01 -- host/auth.sh@44 -- # digest=sha512 00:19:23.547 03:33:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:23.547 03:33:01 -- host/auth.sh@44 -- # keyid=1 00:19:23.547 03:33:01 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:23.547 03:33:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:23.547 03:33:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:23.806 03:33:01 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:23.806 03:33:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:19:23.806 03:33:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.806 03:33:01 -- host/auth.sh@68 -- # digest=sha512 00:19:23.806 03:33:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:23.806 03:33:01 -- host/auth.sh@68 -- # keyid=1 00:19:23.806 03:33:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.806 03:33:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.807 03:33:01 -- common/autotest_common.sh@10 -- # set +x 00:19:23.807 03:33:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.807 03:33:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.807 03:33:01 -- nvmf/common.sh@717 -- # local ip 00:19:23.807 03:33:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.807 03:33:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.807 03:33:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.807 03:33:01 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.807 03:33:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:23.807 03:33:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.807 03:33:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:23.807 03:33:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:23.807 03:33:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:23.807 03:33:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:23.807 03:33:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.807 03:33:01 -- common/autotest_common.sh@10 -- # set +x 00:19:24.741 nvme0n1 00:19:24.741 03:33:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.741 03:33:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.741 03:33:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:24.741 03:33:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.741 03:33:02 -- common/autotest_common.sh@10 -- # set +x 00:19:24.741 03:33:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.741 03:33:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.741 03:33:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.742 03:33:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.742 03:33:02 -- common/autotest_common.sh@10 -- # set +x 00:19:24.742 03:33:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.742 03:33:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:24.742 03:33:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:24.742 03:33:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:24.742 03:33:02 -- host/auth.sh@44 -- # digest=sha512 00:19:24.742 03:33:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:24.742 03:33:02 -- host/auth.sh@44 -- # keyid=2 00:19:24.742 03:33:02 -- host/auth.sh@45 -- # key=DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:24.742 03:33:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:24.742 03:33:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:24.742 03:33:02 -- host/auth.sh@49 -- # echo DHHC-1:01:YjkxZTA3ODY3MjJkMzYxMmRjNTlmZmUzODkxY2FiZWJC2kjC: 00:19:24.742 03:33:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:19:24.742 03:33:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:24.742 03:33:02 -- host/auth.sh@68 -- # digest=sha512 00:19:24.742 03:33:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:24.742 03:33:02 -- host/auth.sh@68 -- # keyid=2 00:19:24.742 03:33:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:24.742 03:33:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.742 03:33:02 -- common/autotest_common.sh@10 -- # set +x 00:19:24.742 03:33:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.742 03:33:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:24.742 03:33:02 -- nvmf/common.sh@717 -- # local ip 00:19:24.742 03:33:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:24.742 03:33:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:24.742 03:33:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.742 03:33:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.742 03:33:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:24.742 03:33:02 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:19:24.742 03:33:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:24.742 03:33:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:24.742 03:33:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:24.742 03:33:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:24.742 03:33:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.742 03:33:02 -- common/autotest_common.sh@10 -- # set +x 00:19:25.676 nvme0n1 00:19:25.676 03:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.676 03:33:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.676 03:33:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:25.676 03:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.676 03:33:03 -- common/autotest_common.sh@10 -- # set +x 00:19:25.676 03:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.676 03:33:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.676 03:33:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.676 03:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.676 03:33:03 -- common/autotest_common.sh@10 -- # set +x 00:19:25.676 03:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.676 03:33:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:25.676 03:33:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:25.676 03:33:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:25.676 03:33:03 -- host/auth.sh@44 -- # digest=sha512 00:19:25.676 03:33:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:25.676 03:33:03 -- host/auth.sh@44 -- # keyid=3 00:19:25.676 03:33:03 -- host/auth.sh@45 -- # key=DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:25.676 03:33:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:25.676 03:33:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:25.676 03:33:03 -- host/auth.sh@49 -- # echo DHHC-1:02:NDkxOGRmN2RiZGM5NzI3NDQ0OTc4MGMyMDc0MTYwZGYwOTY0MGE1NWY4MWY1NThl6N8qWA==: 00:19:25.676 03:33:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:19:25.676 03:33:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:25.676 03:33:03 -- host/auth.sh@68 -- # digest=sha512 00:19:25.676 03:33:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:25.676 03:33:03 -- host/auth.sh@68 -- # keyid=3 00:19:25.676 03:33:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.676 03:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.676 03:33:03 -- common/autotest_common.sh@10 -- # set +x 00:19:25.676 03:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.676 03:33:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:25.676 03:33:03 -- nvmf/common.sh@717 -- # local ip 00:19:25.676 03:33:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:25.676 03:33:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:25.677 03:33:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.677 03:33:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.677 03:33:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:25.677 03:33:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.677 03:33:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:25.677 03:33:03 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:25.677 03:33:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:25.677 03:33:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:25.677 03:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.677 03:33:03 -- common/autotest_common.sh@10 -- # set +x 00:19:26.612 nvme0n1 00:19:26.612 03:33:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.612 03:33:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.612 03:33:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:26.612 03:33:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.612 03:33:04 -- common/autotest_common.sh@10 -- # set +x 00:19:26.612 03:33:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.612 03:33:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.612 03:33:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.612 03:33:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.612 03:33:04 -- common/autotest_common.sh@10 -- # set +x 00:19:26.612 03:33:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.612 03:33:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:26.612 03:33:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:26.612 03:33:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:26.612 03:33:04 -- host/auth.sh@44 -- # digest=sha512 00:19:26.612 03:33:04 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:26.612 03:33:04 -- host/auth.sh@44 -- # keyid=4 00:19:26.612 03:33:04 -- host/auth.sh@45 -- # key=DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:26.612 03:33:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:26.612 03:33:04 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:26.612 03:33:04 -- host/auth.sh@49 -- # echo DHHC-1:03:YWQ5ZjNhZmY3ZThlNTI0NjI3MDQxOTM0Y2NmYzU3OWUwYWY1ZTIzMDhmMGNmZjJlZGQ2MWIwY2ZkOTkyZTM3Mm57pyY=: 00:19:26.612 03:33:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:19:26.612 03:33:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:26.612 03:33:04 -- host/auth.sh@68 -- # digest=sha512 00:19:26.612 03:33:04 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:26.612 03:33:04 -- host/auth.sh@68 -- # keyid=4 00:19:26.612 03:33:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:26.612 03:33:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.612 03:33:04 -- common/autotest_common.sh@10 -- # set +x 00:19:26.612 03:33:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.870 03:33:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:26.870 03:33:04 -- nvmf/common.sh@717 -- # local ip 00:19:26.870 03:33:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:26.870 03:33:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:26.870 03:33:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.870 03:33:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.870 03:33:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:26.870 03:33:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.870 03:33:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:26.870 03:33:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:26.870 03:33:04 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:26.870 03:33:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:26.870 03:33:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.870 03:33:04 -- common/autotest_common.sh@10 -- # set +x 00:19:27.838 nvme0n1 00:19:27.838 03:33:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.838 03:33:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.838 03:33:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.838 03:33:05 -- common/autotest_common.sh@10 -- # set +x 00:19:27.838 03:33:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:27.838 03:33:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.838 03:33:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.838 03:33:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.838 03:33:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.838 03:33:05 -- common/autotest_common.sh@10 -- # set +x 00:19:27.838 03:33:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.838 03:33:05 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:27.838 03:33:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:27.838 03:33:05 -- host/auth.sh@44 -- # digest=sha256 00:19:27.838 03:33:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:27.838 03:33:05 -- host/auth.sh@44 -- # keyid=1 00:19:27.838 03:33:05 -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:27.838 03:33:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:27.838 03:33:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:27.838 03:33:05 -- host/auth.sh@49 -- # echo DHHC-1:00:YmNmYjRjMGYyNDJiNDQ2YzgzNjAxODEyY2Q1NWE2M2ExM2JmZmZjMzVkZGFhYTgzUC0hew==: 00:19:27.838 03:33:05 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.838 03:33:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.838 03:33:05 -- common/autotest_common.sh@10 -- # set +x 00:19:27.838 03:33:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.838 03:33:05 -- host/auth.sh@119 -- # get_main_ns_ip 00:19:27.838 03:33:05 -- nvmf/common.sh@717 -- # local ip 00:19:27.838 03:33:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:27.838 03:33:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:27.838 03:33:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.838 03:33:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.838 03:33:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:27.838 03:33:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.838 03:33:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:27.838 03:33:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:27.838 03:33:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:27.838 03:33:05 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:27.838 03:33:05 -- common/autotest_common.sh@638 -- # local es=0 00:19:27.838 03:33:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:27.838 
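The trace above shows the suite's negative-test idiom: NOT runs the keyless attach and inverts its exit status, so the step passes only when the RPC is rejected by the DHCHAP-protected target. A minimal sketch of such a wrapper, simplified from the es bookkeeping visible in this trace (the real helper in common/autotest_common.sh additionally validates via valid_exec_arg that its first argument is executable):

NOT() {
    # Run the wrapped command, capturing its exit status instead of
    # aborting the script under `set -e`.
    local es=0
    "$@" || es=$?
    # Exit statuses above 128 mean the command died on a signal;
    # propagate those as real failures, not as the expected error.
    if (( es > 128 )); then
        return "$es"
    fi
    # Succeed only when the wrapped command actually failed.
    (( es != 0 ))
}

With that wrapper, the assertion traced here reads as: bdev_nvme_attach_controller issued without any --dhchap-key must fail, and the -32602 "Invalid parameters" response that follows is the expected outcome, not an error in the test itself.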
03:33:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:27.838 03:33:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:27.838 03:33:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:27.838 03:33:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:27.838 03:33:05 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:27.838 03:33:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.838 03:33:05 -- common/autotest_common.sh@10 -- # set +x 00:19:27.838 request: 00:19:27.838 { 00:19:27.838 "name": "nvme0", 00:19:27.838 "trtype": "tcp", 00:19:27.838 "traddr": "10.0.0.1", 00:19:27.838 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:27.838 "adrfam": "ipv4", 00:19:27.838 "trsvcid": "4420", 00:19:27.838 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:27.838 "method": "bdev_nvme_attach_controller", 00:19:27.838 "req_id": 1 00:19:27.838 } 00:19:27.838 Got JSON-RPC error response 00:19:27.838 response: 00:19:27.838 { 00:19:27.838 "code": -32602, 00:19:27.838 "message": "Invalid parameters" 00:19:27.838 } 00:19:27.838 03:33:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:27.838 03:33:05 -- common/autotest_common.sh@641 -- # es=1 00:19:27.838 03:33:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:27.838 03:33:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:27.838 03:33:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:27.838 03:33:05 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.838 03:33:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.838 03:33:05 -- common/autotest_common.sh@10 -- # set +x 00:19:27.838 03:33:05 -- host/auth.sh@121 -- # jq length 00:19:27.838 03:33:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.838 03:33:05 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:19:27.838 03:33:05 -- host/auth.sh@124 -- # get_main_ns_ip 00:19:27.838 03:33:05 -- nvmf/common.sh@717 -- # local ip 00:19:27.838 03:33:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:27.838 03:33:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:27.838 03:33:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.838 03:33:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.838 03:33:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:27.838 03:33:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.838 03:33:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:27.838 03:33:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:27.838 03:33:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:27.838 03:33:05 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:27.838 03:33:05 -- common/autotest_common.sh@638 -- # local es=0 00:19:27.838 03:33:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:27.838 03:33:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:27.838 03:33:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:27.838 03:33:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:27.838 03:33:05 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:27.838 03:33:05 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:27.838 03:33:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.838 03:33:05 -- common/autotest_common.sh@10 -- # set +x 00:19:27.838 request: 00:19:27.838 { 00:19:27.838 "name": "nvme0", 00:19:27.838 "trtype": "tcp", 00:19:27.838 "traddr": "10.0.0.1", 00:19:27.838 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:27.838 "adrfam": "ipv4", 00:19:27.838 "trsvcid": "4420", 00:19:27.838 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:27.838 "dhchap_key": "key2", 00:19:27.838 "method": "bdev_nvme_attach_controller", 00:19:27.838 "req_id": 1 00:19:27.838 } 00:19:27.838 Got JSON-RPC error response 00:19:27.838 response: 00:19:27.838 { 00:19:27.838 "code": -32602, 00:19:27.838 "message": "Invalid parameters" 00:19:27.838 } 00:19:27.838 03:33:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:27.838 03:33:05 -- common/autotest_common.sh@641 -- # es=1 00:19:27.838 03:33:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:27.838 03:33:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:27.838 03:33:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:27.838 03:33:05 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.838 03:33:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.838 03:33:05 -- common/autotest_common.sh@10 -- # set +x 00:19:27.838 03:33:05 -- host/auth.sh@127 -- # jq length 00:19:27.838 03:33:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.838 03:33:05 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:19:27.838 03:33:05 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:19:27.838 03:33:05 -- host/auth.sh@130 -- # cleanup 00:19:27.838 03:33:05 -- host/auth.sh@24 -- # nvmftestfini 00:19:27.838 03:33:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:27.838 03:33:05 -- nvmf/common.sh@117 -- # sync 00:19:27.838 03:33:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:27.838 03:33:05 -- nvmf/common.sh@120 -- # set +e 00:19:27.838 03:33:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:27.838 03:33:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:27.838 rmmod nvme_tcp 00:19:27.838 rmmod nvme_fabrics 00:19:28.097 03:33:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:28.097 03:33:05 -- nvmf/common.sh@124 -- # set -e 00:19:28.097 03:33:05 -- nvmf/common.sh@125 -- # return 0 00:19:28.097 03:33:05 -- nvmf/common.sh@478 -- # '[' -n 309178 ']' 00:19:28.097 03:33:05 -- nvmf/common.sh@479 -- # killprocess 309178 00:19:28.097 03:33:05 -- common/autotest_common.sh@936 -- # '[' -z 309178 ']' 00:19:28.097 03:33:05 -- common/autotest_common.sh@940 -- # kill -0 309178 00:19:28.097 03:33:05 -- common/autotest_common.sh@941 -- # uname 00:19:28.097 03:33:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:28.097 03:33:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 309178 00:19:28.097 03:33:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:28.097 03:33:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:28.097 03:33:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 309178' 00:19:28.097 killing process with pid 309178 00:19:28.097 03:33:05 -- common/autotest_common.sh@955 -- # kill 309178 00:19:28.097 03:33:05 -- 
common/autotest_common.sh@960 -- # wait 309178 00:19:28.356 03:33:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:28.356 03:33:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:28.356 03:33:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:28.356 03:33:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:28.356 03:33:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:28.356 03:33:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.356 03:33:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.356 03:33:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.260 03:33:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:30.260 03:33:07 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:30.260 03:33:07 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:30.260 03:33:07 -- host/auth.sh@27 -- # clean_kernel_target 00:19:30.260 03:33:07 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:30.260 03:33:07 -- nvmf/common.sh@675 -- # echo 0 00:19:30.260 03:33:07 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:30.260 03:33:07 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:30.260 03:33:07 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:30.260 03:33:07 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:30.260 03:33:07 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:19:30.260 03:33:07 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:19:30.260 03:33:07 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:31.635 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:31.635 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:31.635 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:31.635 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:31.635 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:31.635 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:31.635 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:31.635 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:31.635 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:31.635 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:31.635 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:31.635 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:31.635 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:31.635 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:31.635 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:31.635 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:32.571 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:19:32.571 03:33:10 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.DQx /tmp/spdk.key-null.ps2 /tmp/spdk.key-sha256.Sz7 /tmp/spdk.key-sha384.Mlm /tmp/spdk.key-sha512.4fl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:19:32.571 03:33:10 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:33.964 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:19:33.964 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:19:33.964 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:19:33.964 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:19:33.964 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:19:33.964 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:19:33.964 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:19:33.964 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:19:33.964 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:19:33.964 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:19:33.964 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:19:33.964 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:19:33.964 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:19:33.964 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:19:33.964 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:19:33.964 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:19:33.964 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:19:33.964 00:19:33.964 real 0m46.174s 00:19:33.964 user 0m43.805s 00:19:33.964 sys 0m5.596s 00:19:33.964 03:33:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:33.964 03:33:11 -- common/autotest_common.sh@10 -- # set +x 00:19:33.964 ************************************ 00:19:33.964 END TEST nvmf_auth 00:19:33.964 ************************************ 00:19:33.964 03:33:11 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:19:33.964 03:33:11 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:33.964 03:33:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:33.964 03:33:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:33.964 03:33:11 -- common/autotest_common.sh@10 -- # set +x 00:19:33.964 ************************************ 00:19:33.964 START TEST nvmf_digest 00:19:33.964 ************************************ 00:19:33.964 03:33:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:33.964 * Looking for test storage... 
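Before the digest suite proceeds, note that the auth test traced earlier already tore down the kernel nvmet target it had configured under configfs. A condensed sketch of that clean_kernel_target sequence, with directory names exactly as printed in the trace (which attribute receives the "echo 0" is not named in the trace; it is assumed here to be the namespace enable flag):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
if [[ -e $subsys ]]; then
    # Assumption: disable the namespace before removing it.
    echo 0 > "$subsys/namespaces/1/enable"
    # Unlink the subsystem from port 1, then remove namespace, port,
    # and subsystem directories in dependency order; configfs rmdir
    # fails while children or symlinks remain.
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet
fi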
00:19:33.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:33.964 03:33:11 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.964 03:33:11 -- nvmf/common.sh@7 -- # uname -s 00:19:33.964 03:33:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.964 03:33:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.964 03:33:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.964 03:33:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.964 03:33:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.964 03:33:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.964 03:33:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.964 03:33:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.964 03:33:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.964 03:33:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.964 03:33:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.964 03:33:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.964 03:33:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.964 03:33:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.964 03:33:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.965 03:33:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.965 03:33:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.965 03:33:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.965 03:33:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.965 03:33:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.965 03:33:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.965 03:33:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.965 03:33:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.965 03:33:11 -- paths/export.sh@5 -- # export PATH 00:19:33.965 03:33:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.965 03:33:11 -- nvmf/common.sh@47 -- # : 0 00:19:33.965 03:33:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:33.965 03:33:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:33.965 03:33:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.965 03:33:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.965 03:33:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.965 03:33:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:33.965 03:33:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:33.965 03:33:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:33.965 03:33:11 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:33.965 03:33:11 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:33.965 03:33:11 -- host/digest.sh@16 -- # runtime=2 00:19:33.965 03:33:11 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:33.965 03:33:11 -- host/digest.sh@138 -- # nvmftestinit 00:19:33.965 03:33:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:33.965 03:33:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.965 03:33:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:33.965 03:33:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:33.965 03:33:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:33.965 03:33:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.965 03:33:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.965 03:33:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.965 03:33:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:33.965 03:33:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:33.965 03:33:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:33.965 03:33:11 -- common/autotest_common.sh@10 -- # set +x 00:19:35.870 03:33:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:35.870 03:33:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:35.870 03:33:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:35.870 03:33:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:35.870 03:33:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:35.870 03:33:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:35.870 03:33:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:35.870 03:33:13 -- 
nvmf/common.sh@295 -- # net_devs=() 00:19:35.870 03:33:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:35.870 03:33:13 -- nvmf/common.sh@296 -- # e810=() 00:19:35.870 03:33:13 -- nvmf/common.sh@296 -- # local -ga e810 00:19:35.870 03:33:13 -- nvmf/common.sh@297 -- # x722=() 00:19:35.870 03:33:13 -- nvmf/common.sh@297 -- # local -ga x722 00:19:35.870 03:33:13 -- nvmf/common.sh@298 -- # mlx=() 00:19:35.870 03:33:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:35.870 03:33:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.870 03:33:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.870 03:33:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.870 03:33:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.870 03:33:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.870 03:33:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.870 03:33:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.870 03:33:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.870 03:33:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.870 03:33:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.870 03:33:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.870 03:33:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:35.870 03:33:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:35.870 03:33:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:35.870 03:33:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.870 03:33:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:35.870 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:35.870 03:33:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.870 03:33:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:35.870 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:35.870 03:33:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:35.870 03:33:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:35.870 03:33:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.870 03:33:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.871 03:33:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:35.871 03:33:13 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.871 03:33:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:35.871 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:35.871 03:33:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.871 03:33:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.871 03:33:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.871 03:33:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:35.871 03:33:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.871 03:33:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:35.871 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:35.871 03:33:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.871 03:33:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:35.871 03:33:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:35.871 03:33:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:35.871 03:33:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:35.871 03:33:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:35.871 03:33:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.871 03:33:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.871 03:33:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.871 03:33:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:35.871 03:33:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.871 03:33:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.871 03:33:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:35.871 03:33:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.871 03:33:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.871 03:33:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:35.871 03:33:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:35.871 03:33:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.871 03:33:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.871 03:33:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.130 03:33:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.130 03:33:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:36.130 03:33:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.130 03:33:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.130 03:33:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.130 03:33:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:36.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:19:36.130 00:19:36.130 --- 10.0.0.2 ping statistics --- 00:19:36.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.130 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:19:36.130 03:33:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:36.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:19:36.130 00:19:36.130 --- 10.0.0.1 ping statistics --- 00:19:36.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.130 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:19:36.130 03:33:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.130 03:33:13 -- nvmf/common.sh@411 -- # return 0 00:19:36.130 03:33:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:36.130 03:33:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.130 03:33:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:36.130 03:33:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:36.130 03:33:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.130 03:33:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:36.130 03:33:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:36.130 03:33:13 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:36.130 03:33:13 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:36.130 03:33:13 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:36.130 03:33:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:36.130 03:33:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:36.130 03:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:36.130 ************************************ 00:19:36.130 START TEST nvmf_digest_clean 00:19:36.130 ************************************ 00:19:36.130 03:33:13 -- common/autotest_common.sh@1111 -- # run_digest 00:19:36.130 03:33:13 -- host/digest.sh@120 -- # local dsa_initiator 00:19:36.130 03:33:13 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:36.130 03:33:13 -- host/digest.sh@121 -- # dsa_initiator=false 00:19:36.130 03:33:13 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:36.130 03:33:13 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:36.130 03:33:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:36.130 03:33:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:36.130 03:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:36.130 03:33:13 -- nvmf/common.sh@470 -- # nvmfpid=318832 00:19:36.130 03:33:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:36.130 03:33:13 -- nvmf/common.sh@471 -- # waitforlisten 318832 00:19:36.130 03:33:13 -- common/autotest_common.sh@817 -- # '[' -z 318832 ']' 00:19:36.130 03:33:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.130 03:33:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:36.130 03:33:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.130 03:33:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:36.130 03:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:36.130 [2024-04-19 03:33:13.676205] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:19:36.130 [2024-04-19 03:33:13.676283] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.389 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.390 [2024-04-19 03:33:13.741784] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.390 [2024-04-19 03:33:13.848344] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.390 [2024-04-19 03:33:13.848428] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.390 [2024-04-19 03:33:13.848442] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.390 [2024-04-19 03:33:13.848453] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.390 [2024-04-19 03:33:13.848462] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.390 [2024-04-19 03:33:13.848490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.390 03:33:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:36.390 03:33:13 -- common/autotest_common.sh@850 -- # return 0 00:19:36.390 03:33:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:36.390 03:33:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:36.390 03:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:36.390 03:33:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.390 03:33:13 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:36.390 03:33:13 -- host/digest.sh@126 -- # common_target_config 00:19:36.390 03:33:13 -- host/digest.sh@43 -- # rpc_cmd 00:19:36.390 03:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.390 03:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:36.649 null0 00:19:36.649 [2024-04-19 03:33:14.027261] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.649 [2024-04-19 03:33:14.051511] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.649 03:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.649 03:33:14 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:36.649 03:33:14 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:36.649 03:33:14 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:36.649 03:33:14 -- host/digest.sh@80 -- # rw=randread 00:19:36.649 03:33:14 -- host/digest.sh@80 -- # bs=4096 00:19:36.649 03:33:14 -- host/digest.sh@80 -- # qd=128 00:19:36.649 03:33:14 -- host/digest.sh@80 -- # scan_dsa=false 00:19:36.649 03:33:14 -- host/digest.sh@83 -- # bperfpid=318862 00:19:36.649 03:33:14 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:36.649 03:33:14 -- host/digest.sh@84 -- # waitforlisten 318862 /var/tmp/bperf.sock 00:19:36.649 03:33:14 -- common/autotest_common.sh@817 -- # '[' -z 318862 ']' 00:19:36.649 03:33:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:36.649 03:33:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:36.649 03:33:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:36.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:36.649 03:33:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:36.649 03:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:36.649 [2024-04-19 03:33:14.101199] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:19:36.649 [2024-04-19 03:33:14.101274] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid318862 ] 00:19:36.649 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.649 [2024-04-19 03:33:14.167909] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.908 [2024-04-19 03:33:14.288052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.908 03:33:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:36.908 03:33:14 -- common/autotest_common.sh@850 -- # return 0 00:19:36.908 03:33:14 -- host/digest.sh@86 -- # false 00:19:36.908 03:33:14 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:36.908 03:33:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:37.167 03:33:14 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:37.167 03:33:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:37.733 nvme0n1 00:19:37.733 03:33:15 -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:37.733 03:33:15 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:37.733 Running I/O for 2 seconds... 
00:19:39.719 00:19:39.719 Latency(us) 00:19:39.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.719 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:39.719 nvme0n1 : 2.00 18562.94 72.51 0.00 0.00 6887.29 2815.62 13689.74 00:19:39.719 =================================================================================================================== 00:19:39.719 Total : 18562.94 72.51 0.00 0.00 6887.29 2815.62 13689.74 00:19:39.719 0 00:19:39.719 03:33:17 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:39.719 03:33:17 -- host/digest.sh@93 -- # get_accel_stats 00:19:39.719 03:33:17 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:39.719 03:33:17 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:39.719 | select(.opcode=="crc32c") 00:19:39.719 | "\(.module_name) \(.executed)"' 00:19:39.719 03:33:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:39.978 03:33:17 -- host/digest.sh@94 -- # false 00:19:39.978 03:33:17 -- host/digest.sh@94 -- # exp_module=software 00:19:39.978 03:33:17 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:39.978 03:33:17 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:39.978 03:33:17 -- host/digest.sh@98 -- # killprocess 318862 00:19:39.978 03:33:17 -- common/autotest_common.sh@936 -- # '[' -z 318862 ']' 00:19:39.978 03:33:17 -- common/autotest_common.sh@940 -- # kill -0 318862 00:19:39.978 03:33:17 -- common/autotest_common.sh@941 -- # uname 00:19:39.978 03:33:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:39.978 03:33:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 318862 00:19:39.978 03:33:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:39.978 03:33:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:39.978 03:33:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 318862' 00:19:39.978 killing process with pid 318862 00:19:39.978 03:33:17 -- common/autotest_common.sh@955 -- # kill 318862 00:19:39.978 Received shutdown signal, test time was about 2.000000 seconds 00:19:39.978 00:19:39.978 Latency(us) 00:19:39.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.978 =================================================================================================================== 00:19:39.978 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:39.978 03:33:17 -- common/autotest_common.sh@960 -- # wait 318862 00:19:40.236 03:33:17 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:40.236 03:33:17 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:40.236 03:33:17 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:40.236 03:33:17 -- host/digest.sh@80 -- # rw=randread 00:19:40.237 03:33:17 -- host/digest.sh@80 -- # bs=131072 00:19:40.237 03:33:17 -- host/digest.sh@80 -- # qd=16 00:19:40.237 03:33:17 -- host/digest.sh@80 -- # scan_dsa=false 00:19:40.237 03:33:17 -- host/digest.sh@83 -- # bperfpid=319266 00:19:40.237 03:33:17 -- host/digest.sh@84 -- # waitforlisten 319266 /var/tmp/bperf.sock 00:19:40.237 03:33:17 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:40.237 03:33:17 -- common/autotest_common.sh@817 -- # '[' -z 319266 ']' 00:19:40.237 03:33:17 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:40.237 03:33:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:40.237 03:33:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:40.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:40.237 03:33:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:40.237 03:33:17 -- common/autotest_common.sh@10 -- # set +x 00:19:40.237 [2024-04-19 03:33:17.743606] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:19:40.237 [2024-04-19 03:33:17.743700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319266 ] 00:19:40.237 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:40.237 Zero copy mechanism will not be used. 00:19:40.237 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.496 [2024-04-19 03:33:17.807577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.496 [2024-04-19 03:33:17.927293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.496 03:33:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:40.496 03:33:17 -- common/autotest_common.sh@850 -- # return 0 00:19:40.496 03:33:17 -- host/digest.sh@86 -- # false 00:19:40.496 03:33:17 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:40.496 03:33:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:40.755 03:33:18 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:40.755 03:33:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:41.321 nvme0n1 00:19:41.321 03:33:18 -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:41.321 03:33:18 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:41.321 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:41.321 Zero copy mechanism will not be used. 00:19:41.321 Running I/O for 2 seconds... 
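The randread 128 KiB / qd 16 pass above begins with run_bperf randread 131072 16 false (host/digest.sh@129); its @77-@84 trace shows how the positional arguments become the bdevperf command line. A sketch of that wiring, under the same $rootdir assumption (the real function also has a scan_dsa branch, which stays inert in this run because scan_dsa=false):

    run_bperf() {
        local rw=$1 bs=$2 qd=$3 scan_dsa=$4                     # digest.sh@77-@80
        "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
            -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc & # digest.sh@82
        bperfpid=$!                                             # digest.sh@83
        waitforlisten "$bperfpid" /var/tmp/bperf.sock           # digest.sh@84
    }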
00:19:43.852 00:19:43.852 Latency(us) 00:19:43.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.852 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:43.852 nvme0n1 : 2.00 3094.22 386.78 0.00 0.00 5167.12 4951.61 6699.24 00:19:43.852 =================================================================================================================== 00:19:43.852 Total : 3094.22 386.78 0.00 0.00 5167.12 4951.61 6699.24 00:19:43.852 0 00:19:43.852 03:33:20 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:43.852 03:33:20 -- host/digest.sh@93 -- # get_accel_stats 00:19:43.852 03:33:20 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:43.852 03:33:20 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:43.852 | select(.opcode=="crc32c") 00:19:43.852 | "\(.module_name) \(.executed)"' 00:19:43.852 03:33:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:43.852 03:33:21 -- host/digest.sh@94 -- # false 00:19:43.852 03:33:21 -- host/digest.sh@94 -- # exp_module=software 00:19:43.852 03:33:21 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:43.852 03:33:21 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:43.852 03:33:21 -- host/digest.sh@98 -- # killprocess 319266 00:19:43.852 03:33:21 -- common/autotest_common.sh@936 -- # '[' -z 319266 ']' 00:19:43.852 03:33:21 -- common/autotest_common.sh@940 -- # kill -0 319266 00:19:43.852 03:33:21 -- common/autotest_common.sh@941 -- # uname 00:19:43.852 03:33:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:43.852 03:33:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 319266 00:19:43.852 03:33:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:43.852 03:33:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:43.852 03:33:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 319266' 00:19:43.852 killing process with pid 319266 00:19:43.852 03:33:21 -- common/autotest_common.sh@955 -- # kill 319266 00:19:43.852 Received shutdown signal, test time was about 2.000000 seconds 00:19:43.852 00:19:43.852 Latency(us) 00:19:43.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.852 =================================================================================================================== 00:19:43.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:43.852 03:33:21 -- common/autotest_common.sh@960 -- # wait 319266 00:19:44.110 03:33:21 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:44.110 03:33:21 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:44.110 03:33:21 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:44.110 03:33:21 -- host/digest.sh@80 -- # rw=randwrite 00:19:44.110 03:33:21 -- host/digest.sh@80 -- # bs=4096 00:19:44.110 03:33:21 -- host/digest.sh@80 -- # qd=128 00:19:44.110 03:33:21 -- host/digest.sh@80 -- # scan_dsa=false 00:19:44.110 03:33:21 -- host/digest.sh@83 -- # bperfpid=319791 00:19:44.110 03:33:21 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:44.110 03:33:21 -- host/digest.sh@84 -- # waitforlisten 319791 /var/tmp/bperf.sock 00:19:44.110 03:33:21 -- common/autotest_common.sh@817 -- # '[' -z 319791 ']' 00:19:44.110 03:33:21 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:44.110 03:33:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:44.110 03:33:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:44.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:44.110 03:33:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:44.110 03:33:21 -- common/autotest_common.sh@10 -- # set +x 00:19:44.110 [2024-04-19 03:33:21.477055] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:19:44.110 [2024-04-19 03:33:21.477151] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319791 ] 00:19:44.110 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.110 [2024-04-19 03:33:21.539808] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.110 [2024-04-19 03:33:21.658990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.042 03:33:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:45.042 03:33:22 -- common/autotest_common.sh@850 -- # return 0 00:19:45.042 03:33:22 -- host/digest.sh@86 -- # false 00:19:45.042 03:33:22 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:45.042 03:33:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:45.300 03:33:22 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:45.300 03:33:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:45.865 nvme0n1 00:19:45.865 03:33:23 -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:45.865 03:33:23 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:45.865 Running I/O for 2 seconds... 
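After each 2-second run the script verifies which accel module actually computed the crc32c digests. Reconstructed from the digest.sh@36-@37 and @93-@96 lines above (the jq program is verbatim from the trace; the process-substitution plumbing is an assumption):

    # pull "<module> <count>" for the crc32c opcode out of accel_get_stats
    read -r acc_module acc_executed < <(bperf_rpc accel_get_stats \
        | jq -rc '.operations[]
            | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"')

    exp_module=software                  # digest.sh@94: scan_dsa=false => software
    (( acc_executed > 0 ))               # digest.sh@95: digests were computed
    [[ $acc_module == "$exp_module" ]]   # digest.sh@96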
00:19:47.807 00:19:47.807 Latency(us) 00:19:47.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.807 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:47.807 nvme0n1 : 2.00 20007.68 78.16 0.00 0.00 6387.84 2815.62 17185.00 00:19:47.807 =================================================================================================================== 00:19:47.807 Total : 20007.68 78.16 0.00 0.00 6387.84 2815.62 17185.00 00:19:47.807 0 00:19:47.807 03:33:25 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:47.807 03:33:25 -- host/digest.sh@93 -- # get_accel_stats 00:19:47.807 03:33:25 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:47.807 03:33:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:47.807 03:33:25 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:47.807 | select(.opcode=="crc32c") 00:19:47.807 | "\(.module_name) \(.executed)"' 00:19:48.065 03:33:25 -- host/digest.sh@94 -- # false 00:19:48.065 03:33:25 -- host/digest.sh@94 -- # exp_module=software 00:19:48.065 03:33:25 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:48.065 03:33:25 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:48.065 03:33:25 -- host/digest.sh@98 -- # killprocess 319791 00:19:48.065 03:33:25 -- common/autotest_common.sh@936 -- # '[' -z 319791 ']' 00:19:48.065 03:33:25 -- common/autotest_common.sh@940 -- # kill -0 319791 00:19:48.065 03:33:25 -- common/autotest_common.sh@941 -- # uname 00:19:48.065 03:33:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:48.065 03:33:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 319791 00:19:48.065 03:33:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:48.065 03:33:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:48.065 03:33:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 319791' 00:19:48.065 killing process with pid 319791 00:19:48.065 03:33:25 -- common/autotest_common.sh@955 -- # kill 319791 00:19:48.065 Received shutdown signal, test time was about 2.000000 seconds 00:19:48.065 00:19:48.065 Latency(us) 00:19:48.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.065 =================================================================================================================== 00:19:48.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.065 03:33:25 -- common/autotest_common.sh@960 -- # wait 319791 00:19:48.324 03:33:25 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:48.324 03:33:25 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:48.324 03:33:25 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:48.324 03:33:25 -- host/digest.sh@80 -- # rw=randwrite 00:19:48.324 03:33:25 -- host/digest.sh@80 -- # bs=131072 00:19:48.324 03:33:25 -- host/digest.sh@80 -- # qd=16 00:19:48.324 03:33:25 -- host/digest.sh@80 -- # scan_dsa=false 00:19:48.324 03:33:25 -- host/digest.sh@83 -- # bperfpid=320304 00:19:48.324 03:33:25 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:48.324 03:33:25 -- host/digest.sh@84 -- # waitforlisten 320304 /var/tmp/bperf.sock 00:19:48.324 03:33:25 -- common/autotest_common.sh@817 -- # '[' -z 320304 ']' 00:19:48.324 03:33:25 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:48.324 03:33:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:48.324 03:33:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:48.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:48.324 03:33:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:48.324 03:33:25 -- common/autotest_common.sh@10 -- # set +x 00:19:48.324 [2024-04-19 03:33:25.848245] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:19:48.324 [2024-04-19 03:33:25.848324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320304 ] 00:19:48.324 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:48.324 Zero copy mechanism will not be used. 00:19:48.324 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.582 [2024-04-19 03:33:25.909259] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.582 [2024-04-19 03:33:26.017788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.582 03:33:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:48.582 03:33:26 -- common/autotest_common.sh@850 -- # return 0 00:19:48.582 03:33:26 -- host/digest.sh@86 -- # false 00:19:48.582 03:33:26 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:48.582 03:33:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:49.149 03:33:26 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:49.149 03:33:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:49.407 nvme0n1 00:19:49.407 03:33:26 -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:49.407 03:33:26 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:49.407 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:49.407 Zero copy mechanism will not be used. 00:19:49.407 Running I/O for 2 seconds... 
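Teardown goes through killprocess, whose guard rails are spelled out in the autotest_common.sh@936-@960 lines above: the pid must be non-empty and alive, and its comm name is checked so the helper never signals sudo itself. A condensed sketch of that logic, not the verbatim helper:

    killprocess() {
        [ -n "$1" ] || return 1                            # sh@936: pid must be given
        kill -0 "$1" || return 1                           # sh@940: process must exist
        local process_name=
        if [ "$(uname)" = Linux ]; then                    # sh@941
            process_name=$(ps --no-headers -o comm= "$1")  # sh@942: e.g. reactor_1
        fi
        [ "$process_name" = sudo ] && return 1             # sh@946: never kill sudo (sketch)
        echo "killing process with pid $1"                 # sh@954
        kill "$1"                                          # sh@955
        wait "$1"                                          # sh@960: reap and propagate rc
    }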
00:19:51.939 00:19:51.939 Latency(us) 00:19:51.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.939 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:51.939 nvme0n1 : 2.01 2332.49 291.56 0.00 0.00 6842.91 5170.06 17185.00 00:19:51.939 =================================================================================================================== 00:19:51.939 Total : 2332.49 291.56 0.00 0.00 6842.91 5170.06 17185.00 00:19:51.939 0 00:19:51.939 03:33:28 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:51.939 03:33:28 -- host/digest.sh@93 -- # get_accel_stats 00:19:51.939 03:33:28 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:51.939 03:33:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:51.939 03:33:28 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:51.939 | select(.opcode=="crc32c") 00:19:51.939 | "\(.module_name) \(.executed)"' 00:19:51.939 03:33:29 -- host/digest.sh@94 -- # false 00:19:51.939 03:33:29 -- host/digest.sh@94 -- # exp_module=software 00:19:51.939 03:33:29 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:51.939 03:33:29 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:51.939 03:33:29 -- host/digest.sh@98 -- # killprocess 320304 00:19:51.939 03:33:29 -- common/autotest_common.sh@936 -- # '[' -z 320304 ']' 00:19:51.939 03:33:29 -- common/autotest_common.sh@940 -- # kill -0 320304 00:19:51.939 03:33:29 -- common/autotest_common.sh@941 -- # uname 00:19:51.939 03:33:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:51.939 03:33:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 320304 00:19:51.939 03:33:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:51.939 03:33:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:51.939 03:33:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 320304' 00:19:51.939 killing process with pid 320304 00:19:51.939 03:33:29 -- common/autotest_common.sh@955 -- # kill 320304 00:19:51.939 Received shutdown signal, test time was about 2.000000 seconds 00:19:51.939 00:19:51.939 Latency(us) 00:19:51.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.939 =================================================================================================================== 00:19:51.939 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:51.939 03:33:29 -- common/autotest_common.sh@960 -- # wait 320304 00:19:52.198 03:33:29 -- host/digest.sh@132 -- # killprocess 318832 00:19:52.198 03:33:29 -- common/autotest_common.sh@936 -- # '[' -z 318832 ']' 00:19:52.198 03:33:29 -- common/autotest_common.sh@940 -- # kill -0 318832 00:19:52.198 03:33:29 -- common/autotest_common.sh@941 -- # uname 00:19:52.198 03:33:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:52.198 03:33:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 318832 00:19:52.198 03:33:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:52.198 03:33:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:52.198 03:33:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 318832' 00:19:52.198 killing process with pid 318832 00:19:52.198 03:33:29 -- common/autotest_common.sh@955 -- # kill 318832 00:19:52.198 03:33:29 -- common/autotest_common.sh@960 -- # wait 318832 00:19:52.457 00:19:52.457 
real 0m16.201s 00:19:52.457 user 0m32.681s 00:19:52.457 sys 0m3.880s 00:19:52.457 03:33:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:52.457 03:33:29 -- common/autotest_common.sh@10 -- # set +x 00:19:52.457 ************************************ 00:19:52.457 END TEST nvmf_digest_clean 00:19:52.457 ************************************ 00:19:52.457 03:33:29 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:52.458 03:33:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:52.458 03:33:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:52.458 03:33:29 -- common/autotest_common.sh@10 -- # set +x 00:19:52.458 ************************************ 00:19:52.458 START TEST nvmf_digest_error 00:19:52.458 ************************************ 00:19:52.458 03:33:29 -- common/autotest_common.sh@1111 -- # run_digest_error 00:19:52.458 03:33:29 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:52.458 03:33:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:52.458 03:33:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:52.458 03:33:29 -- common/autotest_common.sh@10 -- # set +x 00:19:52.458 03:33:29 -- nvmf/common.sh@470 -- # nvmfpid=320779 00:19:52.458 03:33:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:52.458 03:33:29 -- nvmf/common.sh@471 -- # waitforlisten 320779 00:19:52.458 03:33:29 -- common/autotest_common.sh@817 -- # '[' -z 320779 ']' 00:19:52.458 03:33:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.458 03:33:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:52.458 03:33:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.458 03:33:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:52.458 03:33:29 -- common/autotest_common.sh@10 -- # set +x 00:19:52.458 [2024-04-19 03:33:29.999568] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:19:52.458 [2024-04-19 03:33:29.999652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.717 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.717 [2024-04-19 03:33:30.070722] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.717 [2024-04-19 03:33:30.179581] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.717 [2024-04-19 03:33:30.179648] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.717 [2024-04-19 03:33:30.179677] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.717 [2024-04-19 03:33:30.179689] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.717 [2024-04-19 03:33:30.179699] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
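From here the suite switches to nvmf_digest_error, and the target itself is restarted with --wait-for-rpc so the accel layer can be rewired before any I/O path exists. The nvmf/common.sh@469-@473 trace above corresponds roughly to the following (netns name and flags taken from the log, $rootdir as before):

    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --wait-for-rpc &     # nvmf/common.sh@469
    nvmfpid=$!                              # nvmf/common.sh@470
    waitforlisten "$nvmfpid"                # nvmf/common.sh@471
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT  # @473

Once the RPC socket is up, host/digest.sh@104 issues accel_assign_opc -o crc32c -m error (the "Operation crc32c will be assigned to module error" notice on the lines that follow), routing every crc32c computation through the error-injection module; common_target_config then creates the null0 bdev and the TCP listener on 10.0.0.2:4420 seen in the tcp.c notices.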
00:19:52.717 [2024-04-19 03:33:30.179728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.717 03:33:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:52.717 03:33:30 -- common/autotest_common.sh@850 -- # return 0 00:19:52.717 03:33:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:52.717 03:33:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:52.718 03:33:30 -- common/autotest_common.sh@10 -- # set +x 00:19:52.718 03:33:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.718 03:33:30 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:52.718 03:33:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.718 03:33:30 -- common/autotest_common.sh@10 -- # set +x 00:19:52.718 [2024-04-19 03:33:30.236302] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:52.718 03:33:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.718 03:33:30 -- host/digest.sh@105 -- # common_target_config 00:19:52.718 03:33:30 -- host/digest.sh@43 -- # rpc_cmd 00:19:52.718 03:33:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.718 03:33:30 -- common/autotest_common.sh@10 -- # set +x 00:19:52.977 null0 00:19:52.977 [2024-04-19 03:33:30.357024] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.977 [2024-04-19 03:33:30.381269] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.977 03:33:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.977 03:33:30 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:52.977 03:33:30 -- host/digest.sh@54 -- # local rw bs qd 00:19:52.977 03:33:30 -- host/digest.sh@56 -- # rw=randread 00:19:52.977 03:33:30 -- host/digest.sh@56 -- # bs=4096 00:19:52.977 03:33:30 -- host/digest.sh@56 -- # qd=128 00:19:52.977 03:33:30 -- host/digest.sh@58 -- # bperfpid=320914 00:19:52.977 03:33:30 -- host/digest.sh@60 -- # waitforlisten 320914 /var/tmp/bperf.sock 00:19:52.977 03:33:30 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:52.977 03:33:30 -- common/autotest_common.sh@817 -- # '[' -z 320914 ']' 00:19:52.977 03:33:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:52.977 03:33:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:52.977 03:33:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:52.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:52.977 03:33:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:52.977 03:33:30 -- common/autotest_common.sh@10 -- # set +x 00:19:52.977 [2024-04-19 03:33:30.430345] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:19:52.977 [2024-04-19 03:33:30.430463] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320914 ] 00:19:52.977 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.977 [2024-04-19 03:33:30.495446] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.236 [2024-04-19 03:33:30.611127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.803 03:33:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.803 03:33:31 -- common/autotest_common.sh@850 -- # return 0 00:19:53.803 03:33:31 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:53.803 03:33:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:54.060 03:33:31 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:54.060 03:33:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.060 03:33:31 -- common/autotest_common.sh@10 -- # set +x 00:19:54.060 03:33:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.060 03:33:31 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:54.060 03:33:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:54.627 nvme0n1 00:19:54.627 03:33:32 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:54.627 03:33:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.627 03:33:32 -- common/autotest_common.sh@10 -- # set +x 00:19:54.627 03:33:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.627 03:33:32 -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:54.627 03:33:32 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:54.627 Running I/O for 2 seconds... 
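The ordering in the digest.sh@61-@69 trace above is deliberate: retries are made unlimited and injection is switched off before the controller attaches, so the connect itself completes with clean digests; only then are 256 corrupt crc32c results armed and the workload started. In sequence:

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # digest.sh@61
    rpc_cmd accel_error_inject_error -o crc32c -t disable                    # @63: attach cleanly
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                       # @64
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256             # @67: corrupt 256 digests
    bperf_py perform_tests                                                   # @69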
00:19:54.886 [2024-04-19 03:33:32.187936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:54.886 [2024-04-19 03:33:32.187985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.886 [2024-04-19 03:33:32.188017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.886 [2024-04-19 03:33:32.212026] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:54.886 [2024-04-19 03:33:32.212064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.886 [2024-04-19 03:33:32.212090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.886 [2024-04-19 03:33:32.234872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:54.886 [2024-04-19 03:33:32.234908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.886 [2024-04-19 03:33:32.234932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.886 [2024-04-19 03:33:32.256577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:54.886 [2024-04-19 03:33:32.256607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.886 [2024-04-19 03:33:32.256630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.886 [2024-04-19 03:33:32.272972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:54.886 [2024-04-19 03:33:32.273002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.886 [2024-04-19 03:33:32.273026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.886 [2024-04-19 03:33:32.295442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:54.886 [2024-04-19 03:33:32.295472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.886 [2024-04-19 03:33:32.295497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.886 [2024-04-19 03:33:32.317867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:54.886 [2024-04-19 03:33:32.317903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.886 [2024-04-19 03:33:32.317926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.886 [2024-04-19 03:33:32.334079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:54.886 [2024-04-19 03:33:32.334115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.886 [2024-04-19 03:33:32.334140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.886 [2024-04-19 03:33:32.356987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:54.886 [2024-04-19 03:33:32.357025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.886 [2024-04-19 03:33:32.357044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.886 [2024-04-19 03:33:32.378928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:54.886 [2024-04-19 03:33:32.378964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.886 [2024-04-19 03:33:32.378983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.886 [2024-04-19 03:33:32.402090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:54.886 [2024-04-19 03:33:32.402127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.886 [2024-04-19 03:33:32.402147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.886 [2024-04-19 03:33:32.424171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:54.886 [2024-04-19 03:33:32.424207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.886 [2024-04-19 03:33:32.424231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.886 [2024-04-19 03:33:32.440756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:54.886 [2024-04-19 03:33:32.440792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.886 [2024-04-19 03:33:32.440811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.145 [2024-04-19 03:33:32.464182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.145 [2024-04-19 03:33:32.464218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.145 [2024-04-19 03:33:32.464238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.145 [2024-04-19 03:33:32.487965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.145 [2024-04-19 03:33:32.488002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.145 [2024-04-19 03:33:32.488022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.145 [2024-04-19 03:33:32.509454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.145 [2024-04-19 03:33:32.509484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.145 [2024-04-19 03:33:32.509502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.145 [2024-04-19 03:33:32.525049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.145 [2024-04-19 03:33:32.525085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.145 [2024-04-19 03:33:32.525112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.145 [2024-04-19 03:33:32.547895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.145 [2024-04-19 03:33:32.547936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.145 [2024-04-19 03:33:32.547956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.145 [2024-04-19 03:33:32.571475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.145 [2024-04-19 03:33:32.571507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.145 [2024-04-19 03:33:32.571524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.145 [2024-04-19 03:33:32.595210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.145 [2024-04-19 03:33:32.595248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.145 [2024-04-19 03:33:32.595268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.145 [2024-04-19 03:33:32.617709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.145 [2024-04-19 03:33:32.617745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.145 [2024-04-19 03:33:32.617765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.145 [2024-04-19 03:33:32.633650] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.145 [2024-04-19 03:33:32.633707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.145 [2024-04-19 03:33:32.633727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.145 [2024-04-19 03:33:32.655293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.145 [2024-04-19 03:33:32.655329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.145 [2024-04-19 03:33:32.655348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.145 [2024-04-19 03:33:32.678688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.145 [2024-04-19 03:33:32.678718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.145 [2024-04-19 03:33:32.678756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.145 [2024-04-19 03:33:32.700636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.145 [2024-04-19 03:33:32.700683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.145 [2024-04-19 03:33:32.700703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.403 [2024-04-19 03:33:32.723489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.403 [2024-04-19 03:33:32.723524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.403 [2024-04-19 03:33:32.723542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.403 [2024-04-19 03:33:32.746379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.403 [2024-04-19 03:33:32.746436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.403 [2024-04-19 03:33:32.746454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.403 [2024-04-19 03:33:32.771646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.403 [2024-04-19 03:33:32.771675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.403 
[2024-04-19 03:33:32.771692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.403 [2024-04-19 03:33:32.786216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.403 [2024-04-19 03:33:32.786251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.403 [2024-04-19 03:33:32.786271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.403 [2024-04-19 03:33:32.808646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.403 [2024-04-19 03:33:32.808694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.403 [2024-04-19 03:33:32.808714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.403 [2024-04-19 03:33:32.831497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.403 [2024-04-19 03:33:32.831527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.403 [2024-04-19 03:33:32.831543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.403 [2024-04-19 03:33:32.855478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.403 [2024-04-19 03:33:32.855508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.403 [2024-04-19 03:33:32.855526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.403 [2024-04-19 03:33:32.878378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.403 [2024-04-19 03:33:32.878592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.403 [2024-04-19 03:33:32.878611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.403 [2024-04-19 03:33:32.901882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.403 [2024-04-19 03:33:32.901918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.403 [2024-04-19 03:33:32.901938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.403 [2024-04-19 03:33:32.925174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.403 [2024-04-19 03:33:32.925210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25259 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.403 [2024-04-19 03:33:32.925230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.403 [2024-04-19 03:33:32.939574] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.403 [2024-04-19 03:33:32.939604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.403 [2024-04-19 03:33:32.939620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.660 [2024-04-19 03:33:32.961367] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.661 [2024-04-19 03:33:32.961426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.661 [2024-04-19 03:33:32.961445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.661 [2024-04-19 03:33:32.984485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.661 [2024-04-19 03:33:32.984514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.661 [2024-04-19 03:33:32.984530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.661 [2024-04-19 03:33:33.008546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.661 [2024-04-19 03:33:33.008577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.661 [2024-04-19 03:33:33.008594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.661 [2024-04-19 03:33:33.031624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.661 [2024-04-19 03:33:33.031745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.661 [2024-04-19 03:33:33.031766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.661 [2024-04-19 03:33:33.052651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.661 [2024-04-19 03:33:33.052701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.661 [2024-04-19 03:33:33.052721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.661 [2024-04-19 03:33:33.076756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.661 [2024-04-19 03:33:33.076792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:3009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.661 [2024-04-19 03:33:33.076812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.661 [2024-04-19 03:33:33.097511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.661 [2024-04-19 03:33:33.097542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.661 [2024-04-19 03:33:33.097565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.661 [2024-04-19 03:33:33.114254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.661 [2024-04-19 03:33:33.114291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.661 [2024-04-19 03:33:33.114310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.661 [2024-04-19 03:33:33.137185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.661 [2024-04-19 03:33:33.137222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.661 [2024-04-19 03:33:33.137241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.661 [2024-04-19 03:33:33.160022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.661 [2024-04-19 03:33:33.160058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.661 [2024-04-19 03:33:33.160079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.661 [2024-04-19 03:33:33.183340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.661 [2024-04-19 03:33:33.183377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.661 [2024-04-19 03:33:33.183406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.661 [2024-04-19 03:33:33.206249] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.661 [2024-04-19 03:33:33.206285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.661 [2024-04-19 03:33:33.206305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.919 [2024-04-19 03:33:33.229558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.919 [2024-04-19 03:33:33.229589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.919 [2024-04-19 03:33:33.229605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.919 [2024-04-19 03:33:33.252769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.919 [2024-04-19 03:33:33.252806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.919 [2024-04-19 03:33:33.252826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.919 [2024-04-19 03:33:33.273694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.919 [2024-04-19 03:33:33.273745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.919 [2024-04-19 03:33:33.273765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.919 [2024-04-19 03:33:33.289753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.919 [2024-04-19 03:33:33.289796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.919 [2024-04-19 03:33:33.289823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.919 [2024-04-19 03:33:33.313732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.919 [2024-04-19 03:33:33.313770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.919 [2024-04-19 03:33:33.313789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.919 [2024-04-19 03:33:33.336960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.919 [2024-04-19 03:33:33.336998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.919 [2024-04-19 03:33:33.337018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.919 [2024-04-19 03:33:33.359677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.919 [2024-04-19 03:33:33.359722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.919 [2024-04-19 03:33:33.359744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.919 [2024-04-19 03:33:33.382373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.919 
[2024-04-19 03:33:33.382432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.919 [2024-04-19 03:33:33.382450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.919 [2024-04-19 03:33:33.404559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.919 [2024-04-19 03:33:33.404605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.919 [2024-04-19 03:33:33.404622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.919 [2024-04-19 03:33:33.420057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.919 [2024-04-19 03:33:33.420093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.919 [2024-04-19 03:33:33.420114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.919 [2024-04-19 03:33:33.441905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.919 [2024-04-19 03:33:33.441942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.919 [2024-04-19 03:33:33.441961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.919 [2024-04-19 03:33:33.465073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:55.919 [2024-04-19 03:33:33.465110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.919 [2024-04-19 03:33:33.465130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.177 [2024-04-19 03:33:33.487999] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.177 [2024-04-19 03:33:33.488035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.177 [2024-04-19 03:33:33.488055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.177 [2024-04-19 03:33:33.511072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.177 [2024-04-19 03:33:33.511108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.177 [2024-04-19 03:33:33.511127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.177 [2024-04-19 03:33:33.534253] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x5dfaa0) 00:19:56.177 [2024-04-19 03:33:33.534290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.177 [2024-04-19 03:33:33.534310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.177 [2024-04-19 03:33:33.557332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.177 [2024-04-19 03:33:33.557369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.177 [2024-04-19 03:33:33.557397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.177 [2024-04-19 03:33:33.571742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.177 [2024-04-19 03:33:33.571778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.177 [2024-04-19 03:33:33.571797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.177 [2024-04-19 03:33:33.593533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.177 [2024-04-19 03:33:33.593564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.177 [2024-04-19 03:33:33.593580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.177 [2024-04-19 03:33:33.616753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.177 [2024-04-19 03:33:33.616790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.177 [2024-04-19 03:33:33.616810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.177 [2024-04-19 03:33:33.641603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.177 [2024-04-19 03:33:33.641633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.177 [2024-04-19 03:33:33.641657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.177 [2024-04-19 03:33:33.664833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.177 [2024-04-19 03:33:33.664877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.177 [2024-04-19 03:33:33.664899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.177 [2024-04-19 03:33:33.679940] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.177 [2024-04-19 03:33:33.679976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.177 [2024-04-19 03:33:33.679996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.177 [2024-04-19 03:33:33.702721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.177 [2024-04-19 03:33:33.702757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.177 [2024-04-19 03:33:33.702776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.177 [2024-04-19 03:33:33.724776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.177 [2024-04-19 03:33:33.724813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.177 [2024-04-19 03:33:33.724833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.436 [2024-04-19 03:33:33.748105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.436 [2024-04-19 03:33:33.748142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.436 [2024-04-19 03:33:33.748161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.436 [2024-04-19 03:33:33.771274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.436 [2024-04-19 03:33:33.771310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.436 [2024-04-19 03:33:33.771330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.436 [2024-04-19 03:33:33.794487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.436 [2024-04-19 03:33:33.794518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.436 [2024-04-19 03:33:33.794534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.436 [2024-04-19 03:33:33.818002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.436 [2024-04-19 03:33:33.818040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.436 [2024-04-19 03:33:33.818059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:19:56.436 [2024-04-19 03:33:33.840604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.436 [2024-04-19 03:33:33.840637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.436 [2024-04-19 03:33:33.840654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.436 [2024-04-19 03:33:33.861590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.436 [2024-04-19 03:33:33.861622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.436 [2024-04-19 03:33:33.861638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.436 [2024-04-19 03:33:33.877293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.436 [2024-04-19 03:33:33.877327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.436 [2024-04-19 03:33:33.877346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.436 [2024-04-19 03:33:33.899274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.436 [2024-04-19 03:33:33.899305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.436 [2024-04-19 03:33:33.899321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.436 [2024-04-19 03:33:33.921070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.436 [2024-04-19 03:33:33.921100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.436 [2024-04-19 03:33:33.921116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.436 [2024-04-19 03:33:33.935987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.436 [2024-04-19 03:33:33.936017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.436 [2024-04-19 03:33:33.936033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.436 [2024-04-19 03:33:33.955077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.436 [2024-04-19 03:33:33.955110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.436 [2024-04-19 03:33:33.955186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.436 [2024-04-19 03:33:33.976399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.436 [2024-04-19 03:33:33.976430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.436 [2024-04-19 03:33:33.976447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.695 [2024-04-19 03:33:33.996313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.695 [2024-04-19 03:33:33.996345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.695 [2024-04-19 03:33:33.996377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.695 [2024-04-19 03:33:34.010769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.695 [2024-04-19 03:33:34.010799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.695 [2024-04-19 03:33:34.010823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.695 [2024-04-19 03:33:34.031450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.695 [2024-04-19 03:33:34.031481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.695 [2024-04-19 03:33:34.031497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.695 [2024-04-19 03:33:34.052929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.695 [2024-04-19 03:33:34.052961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.695 [2024-04-19 03:33:34.052978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.695 [2024-04-19 03:33:34.074406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.695 [2024-04-19 03:33:34.074453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.695 [2024-04-19 03:33:34.074470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.695 [2024-04-19 03:33:34.095069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0) 00:19:56.695 [2024-04-19 03:33:34.095100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.695 [2024-04-19 03:33:34.095116] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:56.695 [2024-04-19 03:33:34.115616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0)
00:19:56.695 [2024-04-19 03:33:34.115647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.695 [2024-04-19 03:33:34.115664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:56.695 [2024-04-19 03:33:34.136322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0)
00:19:56.695 [2024-04-19 03:33:34.136352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.695 [2024-04-19 03:33:34.136388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:56.695 [2024-04-19 03:33:34.151286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5dfaa0)
00:19:56.695 [2024-04-19 03:33:34.151333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.695 [2024-04-19 03:33:34.151350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:56.695
00:19:56.695 Latency(us)
00:19:56.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:56.695 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:19:56.695 nvme0n1 : 2.00 11918.05 46.55 0.00 0.00 10730.47 4150.61 34369.99
00:19:56.695 ===================================================================================================================
00:19:56.695 Total : 11918.05 46.55 0.00 0.00 10730.47 4150.61 34369.99
00:19:56.695 0
00:19:56.695 03:33:34 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:56.695 03:33:34 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:56.695 03:33:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:56.695 03:33:34 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:56.695 | .driver_specific
00:19:56.695 | .nvme_error
00:19:56.695 | .status_code
00:19:56.695 | .command_transient_transport_error'
00:19:56.953 03:33:34 -- host/digest.sh@71 -- # (( 93 > 0 ))
00:19:56.953 03:33:34 -- host/digest.sh@73 -- # killprocess 320914
00:19:56.953 03:33:34 -- common/autotest_common.sh@936 -- # '[' -z 320914 ']'
00:19:56.953 03:33:34 -- common/autotest_common.sh@940 -- # kill -0 320914
00:19:56.953 03:33:34 -- common/autotest_common.sh@941 -- # uname
00:19:56.953 03:33:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:56.953 03:33:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 320914
00:19:56.953 03:33:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:19:56.953 03:33:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:19:56.953 03:33:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 320914'
00:19:56.953 killing process with pid 320914
00:19:56.953 03:33:34 -- common/autotest_common.sh@955 -- # kill 320914
00:19:56.953 Received shutdown signal, test time was about 2.000000 seconds
00:19:56.953
00:19:56.953 Latency(us)
00:19:56.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:56.953 ===================================================================================================================
00:19:56.953 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:56.953 03:33:34 -- common/autotest_common.sh@960 -- # wait 320914
00:19:57.211 03:33:34 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:19:57.211 03:33:34 -- host/digest.sh@54 -- # local rw bs qd
00:19:57.211 03:33:34 -- host/digest.sh@56 -- # rw=randread
00:19:57.211 03:33:34 -- host/digest.sh@56 -- # bs=131072
00:19:57.211 03:33:34 -- host/digest.sh@56 -- # qd=16
00:19:57.211 03:33:34 -- host/digest.sh@58 -- # bperfpid=321345
00:19:57.211 03:33:34 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:19:57.211 03:33:34 -- host/digest.sh@60 -- # waitforlisten 321345 /var/tmp/bperf.sock
00:19:57.211 03:33:34 -- common/autotest_common.sh@817 -- # '[' -z 321345 ']'
00:19:57.211 03:33:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:57.211 03:33:34 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:57.211 03:33:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:57.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:57.211 03:33:34 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:57.211 03:33:34 -- common/autotest_common.sh@10 -- # set +x
00:19:57.211 [2024-04-19 03:33:34.757350] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization...
00:19:57.211 [2024-04-19 03:33:34.757446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321345 ]
00:19:57.211 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:57.211 Zero copy mechanism will not be used.
00:19:57.469 EAL: No free 2048 kB hugepages reported on node 1
00:19:57.469 [2024-04-19 03:33:34.823523] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:57.469 [2024-04-19 03:33:34.940694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:57.727 03:33:35 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:57.727 03:33:35 -- common/autotest_common.sh@850 -- # return 0
00:19:57.727 03:33:35 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:57.727 03:33:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:57.985 03:33:35 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:57.985 03:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:57.985 03:33:35 -- common/autotest_common.sh@10 -- # set +x
00:19:57.985 03:33:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:57.985 03:33:35 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:57.985 03:33:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:58.243 nvme0n1
00:19:58.243 03:33:35 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:19:58.243 03:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:58.243 03:33:35 -- common/autotest_common.sh@10 -- # set +x
00:19:58.243 03:33:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:58.243 03:33:35 -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:58.243 03:33:35 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:58.243 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:58.243 Zero copy mechanism will not be used.
00:19:58.243 Running I/O for 2 seconds...
00:19:58.243 [2024-04-19 03:33:35.779969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.243 [2024-04-19 03:33:35.780017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.243 [2024-04-19 03:33:35.780041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.243 [2024-04-19 03:33:35.789648] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.243 [2024-04-19 03:33:35.789699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.243 [2024-04-19 03:33:35.789719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:58.243 [2024-04-19 03:33:35.799562] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.243 [2024-04-19 03:33:35.799594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.243 [2024-04-19 03:33:35.799618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.809752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.809782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.809804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.819539] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.819570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.819593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.829199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.829234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.829259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.839147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.839182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.839204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.849194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.849228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.849248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.859047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.859083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.859103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.869223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.869257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.869276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.879086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.879121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.879152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.889147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.889183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.889207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.898949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.898984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.899006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.909006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.909041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.909060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.918992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.919026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.919054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.928895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.928929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.928954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.938704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.938739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.938768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.948685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.948715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.948742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.958704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.958739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.958765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.968578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.968608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.968626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.978521] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.978550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:58.501 [2024-04-19 03:33:35.978567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.988411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.988460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.988477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:35.998411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:35.998460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:35.998477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:36.008204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:36.008239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:36.008259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:36.018208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:36.018242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:36.018262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:36.028397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:36.028431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:36.028468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:36.038282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.501 [2024-04-19 03:33:36.038316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.501 [2024-04-19 03:33:36.038336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:58.501 [2024-04-19 03:33:36.048634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.502 [2024-04-19 03:33:36.048689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.502 [2024-04-19 03:33:36.048709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.502 [2024-04-19 03:33:36.058764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.502 [2024-04-19 03:33:36.058800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.502 [2024-04-19 03:33:36.058819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.759 [2024-04-19 03:33:36.069042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.759 [2024-04-19 03:33:36.069078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.759 [2024-04-19 03:33:36.069098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:58.759 [2024-04-19 03:33:36.078948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.759 [2024-04-19 03:33:36.078983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.759 [2024-04-19 03:33:36.079003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:58.759 [2024-04-19 03:33:36.088790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.759 [2024-04-19 03:33:36.088824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.759 [2024-04-19 03:33:36.088849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.759 [2024-04-19 03:33:36.098758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.759 [2024-04-19 03:33:36.098793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.759 [2024-04-19 03:33:36.098813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.759 [2024-04-19 03:33:36.108465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.759 [2024-04-19 03:33:36.108495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.759 [2024-04-19 03:33:36.108515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:58.759 [2024-04-19 03:33:36.118603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.118638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.118657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.128533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.128563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.128581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.138427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.138473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.138490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.148293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.148328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.148347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.158327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.158361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.158390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.168254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.168284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.168302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.178078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.178119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.178139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.187860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 
00:19:58.760 [2024-04-19 03:33:36.187894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.187914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.197814] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.197850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.197880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.208265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.208301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.208320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.218792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.218827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.218846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.228974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.229009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.229029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.239017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.239051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.239076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.249160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.249194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.249214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.259197] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.259233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.259253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.269284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.269319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.269349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.279281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.279311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.279329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.289134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.289164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.289181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.299463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.299492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.299510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.760 [2024-04-19 03:33:36.309582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:58.760 [2024-04-19 03:33:36.309613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-04-19 03:33:36.309632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:59.019 [2024-04-19 03:33:36.319601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:19:59.019 [2024-04-19 03:33:36.319647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.019 [2024-04-19 03:33:36.319664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.329798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.329833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.329852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.339737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.339767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.339783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.349675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.349724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.349753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.359801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.359835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.359854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.369808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.369843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.369863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.379887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.379921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.379941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.390044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.390080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.390100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.400209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.400240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.400257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.410245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.410280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.410300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.420262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.420297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.420317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.430461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.430493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.430510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.440366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.440411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.440433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.450320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.450354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.450374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.460422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.460469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.460486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.470455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.470491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.470511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.480528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.480560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.480591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.490513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.490543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.490560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.500524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.500554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.500571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.510342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.510378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.510408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.520627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.520658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.520696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.530749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.530785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.530805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.540800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.019 [2024-04-19 03:33:36.540836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.019 [2024-04-19 03:33:36.540856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.019 [2024-04-19 03:33:36.551186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.020 [2024-04-19 03:33:36.551220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.020 [2024-04-19 03:33:36.551240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.020 [2024-04-19 03:33:36.561298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.020 [2024-04-19 03:33:36.561332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.020 [2024-04-19 03:33:36.561352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.020 [2024-04-19 03:33:36.571303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.020 [2024-04-19 03:33:36.571338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.020 [2024-04-19 03:33:36.571358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.581278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.581313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.581332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.591485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.591517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.591534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.601633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.601664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.601681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.611578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.611615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.611633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.621606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.621636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.621654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.631606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.631637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.631654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.641641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.641690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.641710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.651616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.651646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.651663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.661222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.661257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.661276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.671210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.671245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.671264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.681328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.681363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.681391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.691250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.691285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.691305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.701357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.701401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.701435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.711267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.711303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.711323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.721157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.721193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.721212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.731129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.731165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.731184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.741586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.741616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.741632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.751809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.751844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.751863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.761899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.761933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.761953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.771833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.771868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.278 [2024-04-19 03:33:36.771888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.278 [2024-04-19 03:33:36.781865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.278 [2024-04-19 03:33:36.781900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.279 [2024-04-19 03:33:36.781925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.279 [2024-04-19 03:33:36.791924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.279 [2024-04-19 03:33:36.791961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.279 [2024-04-19 03:33:36.791980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.279 [2024-04-19 03:33:36.802405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.279 [2024-04-19 03:33:36.802461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.279 [2024-04-19 03:33:36.802477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.279 [2024-04-19 03:33:36.812336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.279 [2024-04-19 03:33:36.812371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.279 [2024-04-19 03:33:36.812399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.279 [2024-04-19 03:33:36.822326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.279 [2024-04-19 03:33:36.822361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.279 [2024-04-19 03:33:36.822390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.279 [2024-04-19 03:33:36.832317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.279 [2024-04-19 03:33:36.832349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.279 [2024-04-19 03:33:36.832368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.538 [2024-04-19 03:33:36.842284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.538 [2024-04-19 03:33:36.842320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.538 [2024-04-19 03:33:36.842340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.538 [2024-04-19 03:33:36.852087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.538 [2024-04-19 03:33:36.852117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.538 [2024-04-19 03:33:36.852133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.538 [2024-04-19 03:33:36.861749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.538 [2024-04-19 03:33:36.861786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.538 [2024-04-19 03:33:36.861806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.538 [2024-04-19 03:33:36.871573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.538 [2024-04-19 03:33:36.871606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.538 [2024-04-19 03:33:36.871633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.538 [2024-04-19 03:33:36.881343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.538 [2024-04-19 03:33:36.881379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.538 [2024-04-19 03:33:36.881418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.538 [2024-04-19 03:33:36.890982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.538 [2024-04-19 03:33:36.891013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.538 [2024-04-19 03:33:36.891029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.538 [2024-04-19 03:33:36.900550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.538 [2024-04-19 03:33:36.900582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.538 [2024-04-19 03:33:36.900600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.538 [2024-04-19 03:33:36.910064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.538 [2024-04-19 03:33:36.910099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.538 [2024-04-19 03:33:36.910119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.538 [2024-04-19 03:33:36.919646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.538 [2024-04-19 03:33:36.919676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.538 [2024-04-19 03:33:36.919710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.538 [2024-04-19 03:33:36.929336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.538 [2024-04-19 03:33:36.929370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.538 [2024-04-19 03:33:36.929398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.538 [2024-04-19 03:33:36.938901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.538 [2024-04-19 03:33:36.938932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.538 [2024-04-19 03:33:36.938949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.538 [2024-04-19 03:33:36.948655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.538 [2024-04-19 03:33:36.948700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.538 [2024-04-19 03:33:36.948726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.538 [2024-04-19 03:33:36.958282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.538 [2024-04-19 03:33:36.958317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:36.958337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:36.968032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:36.968077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:36.968093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:36.977731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:36.977767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:36.977786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:36.987576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:36.987606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:36.987623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:36.997045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:36.997076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:36.997093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:37.006571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:37.006602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:37.006619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:37.016365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:37.016410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:37.016431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:37.026177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:37.026208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:37.026224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:37.035745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:37.035795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:37.035826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:37.045530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:37.045561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:37.045578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:37.055721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:37.055751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:37.055767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:37.065239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:37.065269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:37.065285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:37.074802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:37.074833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:37.074849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:37.084636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:37.084666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:37.084684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.539 [2024-04-19 03:33:37.094698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.539 [2024-04-19 03:33:37.094732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.539 [2024-04-19 03:33:37.094751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.797 [2024-04-19 03:33:37.104713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.104747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.104768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.114293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.114327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.114345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.123843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.123873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.123889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.133841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.133877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.133896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.143645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.143674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.143691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.153516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.153553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.153570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.163413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.163461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.163478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.173542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.173572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.173588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.183316] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.183347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.183380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.192773] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.192802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.192819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.202417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.202471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.202493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.212543] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.212576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.212594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.222516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.222555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.222572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.232310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.232345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.232364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.241799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.241834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.241854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.251588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.251618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.251634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.261405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.261454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.261472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.271087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.271134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.271150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.280649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.798 [2024-04-19 03:33:37.280681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.798 [2024-04-19 03:33:37.280714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.798 [2024-04-19 03:33:37.290402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.799 [2024-04-19 03:33:37.290437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.799 [2024-04-19 03:33:37.290469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.799 [2024-04-19 03:33:37.300324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.799 [2024-04-19 03:33:37.300354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.799 [2024-04-19 03:33:37.300371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.799 [2024-04-19 03:33:37.310346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.799 [2024-04-19 03:33:37.310392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.799 [2024-04-19 03:33:37.310415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:59.799 [2024-04-19 03:33:37.320415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.799 [2024-04-19 03:33:37.320464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.799 [2024-04-19 03:33:37.320481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:59.799 [2024-04-19 03:33:37.330371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.799 [2024-04-19 03:33:37.330414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.799 [2024-04-19 03:33:37.330435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:59.799 [2024-04-19 03:33:37.339837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.799 [2024-04-19 03:33:37.339867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.799 [2024-04-19 03:33:37.339884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:59.799 [2024-04-19 03:33:37.349460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:19:59.799 [2024-04-19 03:33:37.349491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.799 [2024-04-19 03:33:37.349508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.057 [2024-04-19 03:33:37.359126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.057 [2024-04-19 03:33:37.359160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.057 [2024-04-19 03:33:37.359179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.057 [2024-04-19 03:33:37.369307] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.057 [2024-04-19 03:33:37.369342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.057 [2024-04-19 03:33:37.369368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.057 [2024-04-19 03:33:37.379349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.057 [2024-04-19 03:33:37.379391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.057 [2024-04-19 03:33:37.379413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.057 [2024-04-19 03:33:37.389284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.057 [2024-04-19 03:33:37.389319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.057 [2024-04-19 03:33:37.389338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.057 [2024-04-19 03:33:37.399212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.057 [2024-04-19 03:33:37.399247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.399267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.409274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.409309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.409328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.419076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.419111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.419131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.429448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.429478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.429494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.439318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.439353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.439373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.449413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.449462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.449479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.459441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.459477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.459495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.469785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.469821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.469841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.480263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.480297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.480317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.490743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.490779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.490799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.500824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.500859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.500878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.510816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.510851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.510871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.520763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.520791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.520808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.530634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.530664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.530694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.540694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.540729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.540749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.550485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.550514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.550530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.560727] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.560776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.560795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.570770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.570806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.570825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.580953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.580988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.581008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.591068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.591104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.591123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.601265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.601300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.601319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.058 [2024-04-19 03:33:37.611166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.058 [2024-04-19 03:33:37.611201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.058 [2024-04-19 03:33:37.611221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.621452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.621486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.621505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.631694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.631750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.631771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.641819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.641853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.641873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.651662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.651707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.651723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.661647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.661676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.661710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.671531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.671561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.671580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.681721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.681756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.681775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.691773] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.691808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.691828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.701253] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.701283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.701301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.710922] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.710956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.710976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.720820] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.720855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.720875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.730981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.731017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.731036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.741054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.741089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.741108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.750852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.750887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.317 [2024-04-19 03:33:37.750907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.317 [2024-04-19 03:33:37.760592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850)
00:20:00.317 [2024-04-19 03:33:37.760623] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.317 [2024-04-19 03:33:37.760642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.318 [2024-04-19 03:33:37.770611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252f850) 00:20:00.318 [2024-04-19 03:33:37.770640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.318 [2024-04-19 03:33:37.770658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.318 00:20:00.318 Latency(us) 00:20:00.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.318 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:00.318 nvme0n1 : 2.00 3120.12 390.02 0.00 0.00 5123.66 4587.52 10679.94 00:20:00.318 =================================================================================================================== 00:20:00.318 Total : 3120.12 390.02 0.00 0.00 5123.66 4587.52 10679.94 00:20:00.318 0 00:20:00.318 03:33:37 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:00.318 03:33:37 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:00.318 03:33:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:00.318 03:33:37 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:00.318 | .driver_specific 00:20:00.318 | .nvme_error 00:20:00.318 | .status_code 00:20:00.318 | .command_transient_transport_error' 00:20:00.577 03:33:38 -- host/digest.sh@71 -- # (( 201 > 0 )) 00:20:00.577 03:33:38 -- host/digest.sh@73 -- # killprocess 321345 00:20:00.577 03:33:38 -- common/autotest_common.sh@936 -- # '[' -z 321345 ']' 00:20:00.577 03:33:38 -- common/autotest_common.sh@940 -- # kill -0 321345 00:20:00.577 03:33:38 -- common/autotest_common.sh@941 -- # uname 00:20:00.577 03:33:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:00.577 03:33:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 321345 00:20:00.577 03:33:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:00.577 03:33:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:00.577 03:33:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 321345' 00:20:00.577 killing process with pid 321345 00:20:00.577 03:33:38 -- common/autotest_common.sh@955 -- # kill 321345 00:20:00.577 Received shutdown signal, test time was about 2.000000 seconds 00:20:00.577 00:20:00.577 Latency(us) 00:20:00.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.577 =================================================================================================================== 00:20:00.577 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.577 03:33:38 -- common/autotest_common.sh@960 -- # wait 321345 00:20:00.836 03:33:38 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:00.836 03:33:38 -- host/digest.sh@54 -- # local rw bs qd 00:20:00.836 03:33:38 -- host/digest.sh@56 -- # rw=randwrite 00:20:00.836 03:33:38 -- host/digest.sh@56 -- # bs=4096 00:20:00.836 03:33:38 -- host/digest.sh@56 -- # qd=128 00:20:00.836 
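(For readers following the trace: the pass/fail decision for the randread digest run above hinges on get_transient_errcount, which queries the bdevperf RPC socket for per-bdev I/O statistics and extracts the transient-transport-error counter that --nvme-error-stat accounting maintains; here it returned 201, so the injected digest corruption demonstrably surfaced host-side. A minimal stand-alone sketch of that check, with the rpc.py path, socket, and jq pipeline taken verbatim from the trace and only the wrapper framing added for illustration:

  #!/usr/bin/env bash
  # Count NVMe completions that finished with "transient transport error"
  # status for a bdev served by the bdevperf instance on /var/tmp/bperf.sock.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  get_transient_errcount() {
      local bdev=$1
      "$rpc" -s "$sock" bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
  }

  count=$(get_transient_errcount nvme0n1)
  # The digest test only passes when the injected CRC32C corruption actually
  # surfaced as host-visible errors, i.e. the counter is non-zero
  # (201 in the run above).
  (( count > 0 )) && echo "observed $count transient transport errors"

The trace then continues with run_bperf_err configuring the randwrite variant of the same test.)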
03:33:38 -- host/digest.sh@58 -- # bperfpid=321870 00:20:00.836 03:33:38 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:00.836 03:33:38 -- host/digest.sh@60 -- # waitforlisten 321870 /var/tmp/bperf.sock 00:20:00.836 03:33:38 -- common/autotest_common.sh@817 -- # '[' -z 321870 ']' 00:20:00.836 03:33:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:00.836 03:33:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:00.836 03:33:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:00.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:00.836 03:33:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:00.836 03:33:38 -- common/autotest_common.sh@10 -- # set +x 00:20:00.836 [2024-04-19 03:33:38.364075] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:20:00.836 [2024-04-19 03:33:38.364146] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321870 ] 00:20:00.836 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.094 [2024-04-19 03:33:38.424770] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.094 [2024-04-19 03:33:38.540188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.352 03:33:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:01.352 03:33:38 -- common/autotest_common.sh@850 -- # return 0 00:20:01.352 03:33:38 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:01.352 03:33:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:01.352 03:33:38 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:01.352 03:33:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.352 03:33:38 -- common/autotest_common.sh@10 -- # set +x 00:20:01.610 03:33:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.610 03:33:38 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:01.610 03:33:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:01.868 nvme0n1 00:20:01.868 03:33:39 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:01.868 03:33:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.868 03:33:39 -- common/autotest_common.sh@10 -- # set +x 00:20:01.868 03:33:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.868 03:33:39 -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:01.868 03:33:39 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:02.128 Running I/O for 2 seconds... 
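(The sequence just traced is the whole error-injection setup in miniature: bdevperf is started suspended with -z, NVMe error accounting and unlimited retries are switched on, any stale crc32c injection is cleared, the controller is attached with data digest (--ddgst) enabled, corruption is armed with accel_error_inject_error, and perform_tests releases the workload. A condensed sketch under those assumptions, with commands, paths, and addresses copied from the trace, the waitforlisten handshake reduced to a comment, and error handling omitted:

  #!/usr/bin/env bash
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bperf_sock=/var/tmp/bperf.sock

  # Host side: start bdevperf suspended (-z) so it can be configured over RPC
  # before the 2-second randwrite workload (4 KiB I/O, queue depth 128) runs.
  "$spdk/build/examples/bdevperf" -m 2 -r "$bperf_sock" \
      -w randwrite -o 4096 -t 2 -q 128 -z &

  # (the real script waits here until the RPC socket is listening)

  # Keep per-status-code NVMe error counters and retry failed I/O forever,
  # so digest errors are counted and retried rather than failing the job.
  "$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1

  # Target side (default RPC socket): clear any stale crc32c injection.
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

  # Attach the remote subsystem with data digest (--ddgst) enabled, so every
  # TCP data PDU carries a CRC32C to be verified.
  "$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm crc32c corruption (-o crc32c -t corrupt -i 256, per the trace),
  # then kick off the I/O.
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests

Because bdevperf was launched with -z, the "Running I/O for 2 seconds..." line above only appears once perform_tests arrives over /var/tmp/bperf.sock; everything that follows is the injected digest corruption being reported as the writes are retried.)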
00:20:02.128 [2024-04-19 03:33:39.555014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190fa7d8 00:20:02.128 [2024-04-19 03:33:39.555819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.128 [2024-04-19 03:33:39.555855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0
[... a long run of further near-identical record triplets omitted, roughly one every 10 ms from 03:33:39.567 through 03:33:40.844: tcp.c data_crc32_calc_done Data digest error on tqpair=(0x71f4c0) with a varying pdu=0x2000190xxxxx, a WRITE on sqid:1 with varying cid and lba, completed as TRANSIENT TRANSPORT ERROR (00/22); every corrupted CRC32C injected on the target shows up as one such retried transient transport error ...]
00:20:03.493 [2024-04-19 03:33:40.856082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190ee190 00:20:03.493 [2024-04-19 03:33:40.857610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:12359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.493 [2024-04-19 03:33:40.857637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.493 [2024-04-19 03:33:40.869499] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f0350 00:20:03.493 [2024-04-19 03:33:40.871201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.493 [2024-04-19 03:33:40.871232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:03.493 [2024-04-19 03:33:40.882922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f2d80 00:20:03.493 [2024-04-19 03:33:40.884810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.493 [2024-04-19 03:33:40.884840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.493 [2024-04-19 03:33:40.896268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f0bc0 00:20:03.493 [2024-04-19 03:33:40.898355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.493 [2024-04-19 03:33:40.898393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:03.493 [2024-04-19 03:33:40.905474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f20d8 00:20:03.493 [2024-04-19 03:33:40.906290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.493 [2024-04-19 03:33:40.906320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:03.493 [2024-04-19 03:33:40.918502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190fb048 00:20:03.493 [2024-04-19 03:33:40.919333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.493 [2024-04-19 03:33:40.919363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.493 [2024-04-19 03:33:40.932584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f8618 00:20:03.493 [2024-04-19 03:33:40.934621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.493 [2024-04-19 03:33:40.934649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.493 [2024-04-19 03:33:40.943702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f92c0 00:20:03.493 [2024-04-19 03:33:40.944681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:75 nsid:1 lba:10692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.493 [2024-04-19 03:33:40.944707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:03.493 [2024-04-19 03:33:40.957036] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190de8a8 00:20:03.494 [2024-04-19 03:33:40.958216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.494 [2024-04-19 03:33:40.958246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.494 [2024-04-19 03:33:40.970497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190eb328 00:20:03.494 [2024-04-19 03:33:40.971834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.494 [2024-04-19 03:33:40.971865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:03.494 [2024-04-19 03:33:40.983778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190ec840 00:20:03.494 [2024-04-19 03:33:40.985283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.494 [2024-04-19 03:33:40.985315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.494 [2024-04-19 03:33:40.996839] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190df988 00:20:03.494 [2024-04-19 03:33:40.998377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.494 [2024-04-19 03:33:40.998410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:03.494 [2024-04-19 03:33:41.009824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f7100 00:20:03.494 [2024-04-19 03:33:41.011739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.494 [2024-04-19 03:33:41.011766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.494 [2024-04-19 03:33:41.022686] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190e6300 00:20:03.494 [2024-04-19 03:33:41.024588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.494 [2024-04-19 03:33:41.024616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:03.494 [2024-04-19 03:33:41.031187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f5378 00:20:03.494 [2024-04-19 03:33:41.031953] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.494 [2024-04-19 03:33:41.031979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:03.494 [2024-04-19 03:33:41.044109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f0350 00:20:03.494 [2024-04-19 03:33:41.045023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.494 [2024-04-19 03:33:41.045050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.056750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190e1b48 00:20:03.752 [2024-04-19 03:33:41.057852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.057878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.069212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190fb048 00:20:03.752 [2024-04-19 03:33:41.070468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.070496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.081748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190e49b0 00:20:03.752 [2024-04-19 03:33:41.083241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.083269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.094019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190fd640 00:20:03.752 [2024-04-19 03:33:41.095477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.095504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.104723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f81e0 00:20:03.752 [2024-04-19 03:33:41.106617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.106644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.115157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190ebfd0 00:20:03.752 [2024-04-19 03:33:41.116157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.116183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.127535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f4298 00:20:03.752 [2024-04-19 03:33:41.128566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.128593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.139826] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190edd58 00:20:03.752 [2024-04-19 03:33:41.141081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.141107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.152238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190eff18 00:20:03.752 [2024-04-19 03:33:41.153624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.153651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.164719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190ecc78 00:20:03.752 [2024-04-19 03:33:41.166322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.166355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.177304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f7538 00:20:03.752 [2024-04-19 03:33:41.179075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.179102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.189840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f6890 00:20:03.752 [2024-04-19 03:33:41.191621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.191648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.202233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f4f40 00:20:03.752 [2024-04-19 03:33:41.204236] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.204263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.210816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190fac10 00:20:03.752 [2024-04-19 03:33:41.211723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.211749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.223202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190e5220 00:20:03.752 [2024-04-19 03:33:41.224354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.224387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.235398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f0788 00:20:03.752 [2024-04-19 03:33:41.236431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.236457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:03.752 [2024-04-19 03:33:41.247318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190e9168 00:20:03.752 [2024-04-19 03:33:41.248461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.752 [2024-04-19 03:33:41.248487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:03.753 [2024-04-19 03:33:41.259328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190e6b70 00:20:03.753 [2024-04-19 03:33:41.260491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.753 [2024-04-19 03:33:41.260519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:03.753 [2024-04-19 03:33:41.271303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190e88f8 00:20:03.753 [2024-04-19 03:33:41.272392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.753 [2024-04-19 03:33:41.272419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:03.753 [2024-04-19 03:33:41.283336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f9f68 00:20:03.753 
[2024-04-19 03:33:41.284424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.753 [2024-04-19 03:33:41.284450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:03.753 [2024-04-19 03:33:41.295298] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190fc560 00:20:03.753 [2024-04-19 03:33:41.296420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.753 [2024-04-19 03:33:41.296447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:03.753 [2024-04-19 03:33:41.307216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190fb8b8 00:20:03.753 [2024-04-19 03:33:41.308335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.753 [2024-04-19 03:33:41.308363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:04.011 [2024-04-19 03:33:41.319334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f5be8 00:20:04.011 [2024-04-19 03:33:41.320478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.011 [2024-04-19 03:33:41.320504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:04.011 [2024-04-19 03:33:41.331808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190ecc78 00:20:04.011 [2024-04-19 03:33:41.333110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.011 [2024-04-19 03:33:41.333137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:04.011 [2024-04-19 03:33:41.344119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190ed0b0 00:20:04.011 [2024-04-19 03:33:41.345437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.011 [2024-04-19 03:33:41.345465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.011 [2024-04-19 03:33:41.356137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190ee5c8 00:20:04.011 [2024-04-19 03:33:41.357363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.011 [2024-04-19 03:33:41.357399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.011 [2024-04-19 03:33:41.368114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190df988 
00:20:04.011 [2024-04-19 03:33:41.369372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.011 [2024-04-19 03:33:41.369406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.011 [2024-04-19 03:33:41.380126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f7970 00:20:04.011 [2024-04-19 03:33:41.381413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.011 [2024-04-19 03:33:41.381441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.011 [2024-04-19 03:33:41.392210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190e6300 00:20:04.011 [2024-04-19 03:33:41.393438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.011 [2024-04-19 03:33:41.393466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.011 [2024-04-19 03:33:41.404180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190fdeb0 00:20:04.011 [2024-04-19 03:33:41.405459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.011 [2024-04-19 03:33:41.405487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.011 [2024-04-19 03:33:41.416147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190e0a68 00:20:04.011 [2024-04-19 03:33:41.417535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.011 [2024-04-19 03:33:41.417563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.011 [2024-04-19 03:33:41.428203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190ff3c8 00:20:04.011 [2024-04-19 03:33:41.429497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.011 [2024-04-19 03:33:41.429524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.011 [2024-04-19 03:33:41.440085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190e5ec8 00:20:04.011 [2024-04-19 03:33:41.441310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.011 [2024-04-19 03:33:41.441337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.011 [2024-04-19 03:33:41.452104] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with 
pdu=0x2000190ddc00 00:20:04.012 [2024-04-19 03:33:41.453428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.012 [2024-04-19 03:33:41.453454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.012 [2024-04-19 03:33:41.464100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190fa3a0 00:20:04.012 [2024-04-19 03:33:41.465397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.012 [2024-04-19 03:33:41.465423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.012 [2024-04-19 03:33:41.476119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190e9e10 00:20:04.012 [2024-04-19 03:33:41.477411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.012 [2024-04-19 03:33:41.477440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.012 [2024-04-19 03:33:41.488103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190e3060 00:20:04.012 [2024-04-19 03:33:41.489355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.012 [2024-04-19 03:33:41.489391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.012 [2024-04-19 03:33:41.500133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190fe720 00:20:04.012 [2024-04-19 03:33:41.501441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.012 [2024-04-19 03:33:41.501469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.012 [2024-04-19 03:33:41.512126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190f7100 00:20:04.012 [2024-04-19 03:33:41.513338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.012 [2024-04-19 03:33:41.513365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.012 [2024-04-19 03:33:41.524120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f4c0) with pdu=0x2000190e4140 00:20:04.012 [2024-04-19 03:33:41.525485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.012 [2024-04-19 03:33:41.525513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.012 [2024-04-19 03:33:41.536156] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x71f4c0) with pdu=0x2000190fbcf0 00:20:04.012 [2024-04-19 03:33:41.537450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.012 [2024-04-19 03:33:41.537477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:04.012 00:20:04.012 Latency(us) 00:20:04.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.012 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:04.012 nvme0n1 : 2.00 20599.55 80.47 0.00 0.00 6204.02 2548.62 15922.82 00:20:04.012 =================================================================================================================== 00:20:04.012 Total : 20599.55 80.47 0.00 0.00 6204.02 2548.62 15922.82 00:20:04.012 0 00:20:04.012 03:33:41 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:04.012 03:33:41 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:04.012 03:33:41 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:04.012 | .driver_specific 00:20:04.012 | .nvme_error 00:20:04.012 | .status_code 00:20:04.012 | .command_transient_transport_error' 00:20:04.012 03:33:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:04.270 03:33:41 -- host/digest.sh@71 -- # (( 161 > 0 )) 00:20:04.270 03:33:41 -- host/digest.sh@73 -- # killprocess 321870 00:20:04.270 03:33:41 -- common/autotest_common.sh@936 -- # '[' -z 321870 ']' 00:20:04.270 03:33:41 -- common/autotest_common.sh@940 -- # kill -0 321870 00:20:04.270 03:33:41 -- common/autotest_common.sh@941 -- # uname 00:20:04.270 03:33:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:04.270 03:33:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 321870 00:20:04.529 03:33:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:04.529 03:33:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:04.529 03:33:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 321870' 00:20:04.529 killing process with pid 321870 00:20:04.529 03:33:41 -- common/autotest_common.sh@955 -- # kill 321870 00:20:04.529 Received shutdown signal, test time was about 2.000000 seconds 00:20:04.529 00:20:04.529 Latency(us) 00:20:04.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.529 =================================================================================================================== 00:20:04.529 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.529 03:33:41 -- common/autotest_common.sh@960 -- # wait 321870 00:20:04.787 03:33:42 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:04.787 03:33:42 -- host/digest.sh@54 -- # local rw bs qd 00:20:04.787 03:33:42 -- host/digest.sh@56 -- # rw=randwrite 00:20:04.787 03:33:42 -- host/digest.sh@56 -- # bs=131072 00:20:04.787 03:33:42 -- host/digest.sh@56 -- # qd=16 00:20:04.787 03:33:42 -- host/digest.sh@58 -- # bperfpid=322276 00:20:04.787 03:33:42 -- host/digest.sh@60 -- # waitforlisten 322276 /var/tmp/bperf.sock 00:20:04.787 03:33:42 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:04.787 03:33:42 -- common/autotest_common.sh@817 -- # '[' -z 322276 ']' 00:20:04.787 
03:33:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:04.787 03:33:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:04.787 03:33:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:04.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:04.787 03:33:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:04.787 03:33:42 -- common/autotest_common.sh@10 -- # set +x 00:20:04.787 [2024-04-19 03:33:42.166767] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:20:04.788 [2024-04-19 03:33:42.166841] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322276 ] 00:20:04.788 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:04.788 Zero copy mechanism will not be used. 00:20:04.788 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.788 [2024-04-19 03:33:42.227928] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.788 [2024-04-19 03:33:42.343232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.722 03:33:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:05.722 03:33:43 -- common/autotest_common.sh@850 -- # return 0 00:20:05.722 03:33:43 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:05.722 03:33:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:05.980 03:33:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:05.980 03:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.980 03:33:43 -- common/autotest_common.sh@10 -- # set +x 00:20:05.980 03:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.980 03:33:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:05.980 03:33:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:06.238 nvme0n1 00:20:06.503 03:33:43 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:06.503 03:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.503 03:33:43 -- common/autotest_common.sh@10 -- # set +x 00:20:06.503 03:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.503 03:33:43 -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:06.503 03:33:43 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:06.503 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:06.503 Zero copy mechanism will not be used. 00:20:06.503 Running I/O for 2 seconds... 
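What the trace above amounts to: host/digest.sh attaches a bdevperf instance to the target over TCP with data digest enabled (--ddgst), tells the accel layer to corrupt crc32c results (accel_error_inject_error -o crc32c -t corrupt -i 32), runs a timed write workload, and then passes only if the corrupted digests surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions. A minimal stand-alone sketch of that final assertion, assembled only from the rpc.py call and jq filter that appear verbatim in this trace (socket path, bdev name, and SPDK checkout path as configured in this run), not the literal test script:

    #!/usr/bin/env bash
    # Sketch of the get_transient_errcount check traced above, assuming the
    # bdevperf instance from this log is still listening on /var/tmp/bperf.sock
    # and that bdev_nvme_set_options was called with --nvme-error-stat.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # Fail unless at least one transient transport error was recorded;
    # the 4096-byte pass above counted 161 of them.
    (( errcount > 0 ))

Note that the injected CRC failures never show up in bdevperf's own Fail/s column: --bdev-retry-count -1 makes the NVMe bdev layer retry transient errors indefinitely, so each I/O eventually completes while the per-controller error counters keep climbing.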
00:20:06.503 [2024-04-19 03:33:43.931904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.503 [2024-04-19 03:33:43.932314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.503 [2024-04-19 03:33:43.932367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:06.503 [2024-04-19 03:33:43.947243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.503 [2024-04-19 03:33:43.947618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.503 [2024-04-19 03:33:43.947649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:06.503 [2024-04-19 03:33:43.961390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.503 [2024-04-19 03:33:43.961672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.503 [2024-04-19 03:33:43.961702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:06.503 [2024-04-19 03:33:43.975860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.503 [2024-04-19 03:33:43.976250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.503 [2024-04-19 03:33:43.976278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:06.503 [2024-04-19 03:33:43.991509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.504 [2024-04-19 03:33:43.991878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.504 [2024-04-19 03:33:43.991907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:06.504 [2024-04-19 03:33:44.006295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.504 [2024-04-19 03:33:44.006660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.504 [2024-04-19 03:33:44.006690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:06.504 [2024-04-19 03:33:44.020736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.504 [2024-04-19 03:33:44.021115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.504 [2024-04-19 03:33:44.021143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:06.504 [2024-04-19 03:33:44.034219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.504 [2024-04-19 03:33:44.034594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.504 [2024-04-19 03:33:44.034638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:06.504 [2024-04-19 03:33:44.047684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.504 [2024-04-19 03:33:44.048059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.504 [2024-04-19 03:33:44.048103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:06.762 [2024-04-19 03:33:44.062002] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.762 [2024-04-19 03:33:44.062374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.762 [2024-04-19 03:33:44.062424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:06.762 [2024-04-19 03:33:44.075576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.762 [2024-04-19 03:33:44.075935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.762 [2024-04-19 03:33:44.075981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:06.762 [2024-04-19 03:33:44.089893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.762 [2024-04-19 03:33:44.090254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.762 [2024-04-19 03:33:44.090296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:06.762 [2024-04-19 03:33:44.104220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.762 [2024-04-19 03:33:44.104596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.762 [2024-04-19 03:33:44.104639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:06.762 [2024-04-19 03:33:44.118663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.762 [2024-04-19 03:33:44.119021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.762 [2024-04-19 03:33:44.119063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:06.762 [2024-04-19 03:33:44.130633] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.762 [2024-04-19 03:33:44.131091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.762 [2024-04-19 03:33:44.131117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:06.763 [2024-04-19 03:33:44.142815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.763 [2024-04-19 03:33:44.143184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.763 [2024-04-19 03:33:44.143213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:06.763 [2024-04-19 03:33:44.155733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.763 [2024-04-19 03:33:44.156095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.763 [2024-04-19 03:33:44.156129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:06.763 [2024-04-19 03:33:44.170333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.763 [2024-04-19 03:33:44.170697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.763 [2024-04-19 03:33:44.170743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:06.763 [2024-04-19 03:33:44.183929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.763 [2024-04-19 03:33:44.184317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.763 [2024-04-19 03:33:44.184345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:06.763 [2024-04-19 03:33:44.198638] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.763 [2024-04-19 03:33:44.199011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.763 [2024-04-19 03:33:44.199058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:06.763 [2024-04-19 03:33:44.212069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.763 [2024-04-19 03:33:44.212351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.763 [2024-04-19 03:33:44.212401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:06.763 [2024-04-19 03:33:44.226144] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.763 [2024-04-19 03:33:44.226533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.763 [2024-04-19 03:33:44.226563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:06.763 [2024-04-19 03:33:44.238483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.763 [2024-04-19 03:33:44.238836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.763 [2024-04-19 03:33:44.238881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:06.763 [2024-04-19 03:33:44.252124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.763 [2024-04-19 03:33:44.252502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.763 [2024-04-19 03:33:44.252530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:06.763 [2024-04-19 03:33:44.266652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.763 [2024-04-19 03:33:44.266999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.763 [2024-04-19 03:33:44.267041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:06.763 [2024-04-19 03:33:44.282562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.763 [2024-04-19 03:33:44.282919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.763 [2024-04-19 03:33:44.282968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:06.763 [2024-04-19 03:33:44.296914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.763 [2024-04-19 03:33:44.297269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.763 [2024-04-19 03:33:44.297311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:06.763 [2024-04-19 03:33:44.310765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:06.763 [2024-04-19 03:33:44.311131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.763 [2024-04-19 03:33:44.311159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:07.021 [2024-04-19 03:33:44.325360] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.021 [2024-04-19 03:33:44.325753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.021 [2024-04-19 03:33:44.325780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:07.021 [2024-04-19 03:33:44.339968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.021 [2024-04-19 03:33:44.340309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.021 [2024-04-19 03:33:44.340336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:07.021 [2024-04-19 03:33:44.353344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.021 [2024-04-19 03:33:44.353721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.021 [2024-04-19 03:33:44.353749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:07.021 [2024-04-19 03:33:44.367722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.021 [2024-04-19 03:33:44.368113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.021 [2024-04-19 03:33:44.368155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:07.021 [2024-04-19 03:33:44.382113] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.021 [2024-04-19 03:33:44.382390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.021 [2024-04-19 03:33:44.382418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:07.022 [2024-04-19 03:33:44.396922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.022 [2024-04-19 03:33:44.397287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.022 [2024-04-19 03:33:44.397328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:07.022 [2024-04-19 03:33:44.410863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.022 [2024-04-19 03:33:44.411218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.022 [2024-04-19 03:33:44.411246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:07.022 [2024-04-19 03:33:44.424726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.022 [2024-04-19 03:33:44.425067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.022 [2024-04-19 03:33:44.425096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:07.022 [2024-04-19 03:33:44.439489] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.022 [2024-04-19 03:33:44.439867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.022 [2024-04-19 03:33:44.439895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:07.022 [2024-04-19 03:33:44.453036] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.022 [2024-04-19 03:33:44.453415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.022 [2024-04-19 03:33:44.453458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:07.022 [2024-04-19 03:33:44.467852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.022 [2024-04-19 03:33:44.468256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.022 [2024-04-19 03:33:44.468283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:07.022 [2024-04-19 03:33:44.481583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.022 [2024-04-19 03:33:44.481927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.022 [2024-04-19 03:33:44.481969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:07.022 [2024-04-19 03:33:44.495352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.022 [2024-04-19 03:33:44.495744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.022 [2024-04-19 03:33:44.495772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:07.022 [2024-04-19 03:33:44.510136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:07.022 [2024-04-19 03:33:44.510521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.022 [2024-04-19 03:33:44.510548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.022 [2024-04-19 03:33:44.524656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.022 [2024-04-19 03:33:44.525038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.022 [2024-04-19 03:33:44.525087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.022 [2024-04-19 03:33:44.538424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.022 [2024-04-19 03:33:44.538767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.022 [2024-04-19 03:33:44.538810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.022 [2024-04-19 03:33:44.551977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.022 [2024-04-19 03:33:44.552344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.022 [2024-04-19 03:33:44.552393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.022 [2024-04-19 03:33:44.566833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.022 [2024-04-19 03:33:44.567193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.022 [2024-04-19 03:33:44.567222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.580291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.580640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.580673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.593971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.594330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.594376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.608534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.608883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.608929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.622173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.622522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.622552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.635626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.636049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.636075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.647942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.648317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.648345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.662636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.662978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.663006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.674682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.675010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.675038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.688183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.688531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.688560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.702804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.703147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.703190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.717268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.717619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.717648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.732213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.732589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.732618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.746094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.746480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.746525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.760813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.761216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.761259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.774515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.774856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.281 [2024-04-19 03:33:44.774898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.281 [2024-04-19 03:33:44.788507] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.281 [2024-04-19 03:33:44.788893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.282 [2024-04-19 03:33:44.788922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.282 [2024-04-19 03:33:44.802093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.282 
[2024-04-19 03:33:44.802480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.282 [2024-04-19 03:33:44.802526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.282 [2024-04-19 03:33:44.815557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.282 [2024-04-19 03:33:44.815898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.282 [2024-04-19 03:33:44.815927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.282 [2024-04-19 03:33:44.830507] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.282 [2024-04-19 03:33:44.830878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.282 [2024-04-19 03:33:44.830920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:44.842602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:44.842929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:44.842958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:44.856094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:44.856473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:44.856517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:44.870945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:44.871304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:44.871332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:44.885626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:44.885966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:44.886016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:44.898495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) 
with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:44.898840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:44.898884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:44.912266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:44.912643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:44.912673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:44.926325] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:44.926683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:44.926728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:44.940200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:44.940573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:44.940601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:44.952871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:44.953124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:44.953153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:44.967054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:44.967419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:44.967447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:44.981157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:44.981543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:44.981572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:44.993190] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:44.993496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:44.993527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:45.007062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:45.007435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:45.007479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:45.020945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:45.021297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:45.021327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:45.036394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:45.036738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:45.036767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:45.050838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:45.051204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:45.051247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:45.063497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:45.063842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:45.063888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:45.075467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:45.075809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:45.075837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.541 [2024-04-19 03:33:45.090108] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.541 [2024-04-19 03:33:45.090369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.541 [2024-04-19 03:33:45.090404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.104373] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.104755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.104786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.117199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.117374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.117409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.131032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.131371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.131408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.145312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.145686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.145715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.160445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.160788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.160817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.173347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.173712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.173741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:20:07.800 [2024-04-19 03:33:45.187777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.188119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.188147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.201478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.201850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.201877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.216171] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.216519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.216548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.231559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.231903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.231932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.245570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.245912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.245962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.259835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.260191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.260218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.273626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.273969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.274015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.286399] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.286742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.286784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.299914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.300285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.800 [2024-04-19 03:33:45.300328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.800 [2024-04-19 03:33:45.313593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.800 [2024-04-19 03:33:45.313935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.801 [2024-04-19 03:33:45.313962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.801 [2024-04-19 03:33:45.326798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.801 [2024-04-19 03:33:45.327136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.801 [2024-04-19 03:33:45.327164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.801 [2024-04-19 03:33:45.340530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.801 [2024-04-19 03:33:45.340870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.801 [2024-04-19 03:33:45.340898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.801 [2024-04-19 03:33:45.355018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:07.801 [2024-04-19 03:33:45.355360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.801 [2024-04-19 03:33:45.355411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.368939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.369291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.369320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.382649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.383007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.383034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.396005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.396347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.396376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.409737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.410078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.410107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.422769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.423137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.423181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.435568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.435827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.435855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.450566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.450950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.450994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.463624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.463994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.464023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.478505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.478847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.478877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.493741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.494122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.494151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.507742] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.508126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.508168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.520367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.520741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.520769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.534966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.535345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.535394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.549219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.549592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.549620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.563190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.563659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 
[2024-04-19 03:33:45.563703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.578467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.578815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.578843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.592923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.593248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.593290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.060 [2024-04-19 03:33:45.607574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.060 [2024-04-19 03:33:45.607790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.060 [2024-04-19 03:33:45.607824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.620579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.620920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.620949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.634053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.634432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.634461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.646844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.647076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.647104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.661662] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.662033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.662061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.676332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.676709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.676737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.690892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.691252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.691294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.705198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.705528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.705557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.719317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.719679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.719707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.732958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.733326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.733353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.747491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.747846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.747887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.762064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.762450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.762477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.775960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.776304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.776345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.789924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.790264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.790306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.804158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.804577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.804605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.816346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.816721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.816764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.830146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.830519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.830565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.842414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.842642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.319 [2024-04-19 03:33:45.842669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.319 [2024-04-19 03:33:45.857709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90 00:20:08.319 [2024-04-19 03:33:45.858112] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
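The repeating pattern above is easiest to verify in bulk rather than by eye. A minimal sketch for tallying it offline, assuming this console section has been saved to a file named digest_error.log (the file name is an assumption, not part of the test):

    log=digest_error.log    # assumed local capture of this console section
    crc_errors=$(grep -c 'data_crc32_calc_done: .ERROR.: Data digest error' "$log")
    transient=$(grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log")
    echo "digest failures: $crc_errors, transient completions: $transient"
    # Each CRC32C failure detected by the TCP transport should surface as
    # exactly one transient transport error completion, so the two counts
    # should match.

The run's final failure group and the bdevperf summary follow.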
00:20:08.579 [2024-04-19 03:33:45.911717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x71f9a0) with pdu=0x2000190fef90
00:20:08.579 [2024-04-19 03:33:45.912075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.579 [2024-04-19 03:33:45.912102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:08.579
00:20:08.579 Latency(us)
00:20:08.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:08.579 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:20:08.579 nvme0n1 : 2.01 2223.20 277.90 0.00 0.00 7181.22 5218.61 16311.18
00:20:08.579 ===================================================================================================================
00:20:08.579 Total : 2223.20 277.90 0.00 0.00 7181.22 5218.61 16311.18
00:20:08.579 0
00:20:08.579 03:33:45 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:08.579 03:33:45 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:08.579 03:33:45 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:08.579 | .driver_specific
00:20:08.579 | .nvme_error
00:20:08.579 | .status_code
00:20:08.579 | .command_transient_transport_error'
00:20:08.579 03:33:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:08.838 03:33:46 -- host/digest.sh@71 -- # (( 143 > 0 ))
00:20:08.838 03:33:46 -- host/digest.sh@73 -- # killprocess 322276
00:20:08.838 03:33:46 -- common/autotest_common.sh@936 -- # '[' -z 322276 ']'
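The get_transient_errcount helper traced above boils down to one RPC plus a jq filter. Written out as a standalone one-liner, a sketch that assumes the bperf app is still listening on /var/tmp/bperf.sock and exposes the bdev nvme0n1, exactly as in this run:

    # Query per-bdev NVMe error statistics and pull out the transient
    # transport error counter that the digest test asserts on.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The test only asserts that the value is positive, which is the (( 143 > 0 )) check in the trace above.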
00:20:08.838 03:33:46 -- common/autotest_common.sh@940 -- # kill -0 322276
00:20:08.838 03:33:46 -- common/autotest_common.sh@941 -- # uname
00:20:08.838 03:33:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:08.838 03:33:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 322276
00:20:08.838 03:33:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:08.838 03:33:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:08.838 03:33:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 322276'
killing process with pid 322276
00:20:08.838 03:33:46 -- common/autotest_common.sh@955 -- # kill 322276
00:20:08.838 Received shutdown signal, test time was about 2.000000 seconds
00:20:08.838
00:20:08.838 Latency(us)
00:20:08.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:08.838 ===================================================================================================================
00:20:08.838 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:08.838 03:33:46 -- common/autotest_common.sh@960 -- # wait 322276
00:20:09.096 03:33:46 -- host/digest.sh@116 -- # killprocess 320779
00:20:09.096 03:33:46 -- common/autotest_common.sh@936 -- # '[' -z 320779 ']'
00:20:09.096 03:33:46 -- common/autotest_common.sh@940 -- # kill -0 320779
00:20:09.096 03:33:46 -- common/autotest_common.sh@941 -- # uname
00:20:09.096 03:33:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:09.096 03:33:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 320779
00:20:09.096 03:33:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:09.096 03:33:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:09.096 03:33:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 320779'
killing process with pid 320779
00:20:09.096 03:33:46 -- common/autotest_common.sh@955 -- # kill 320779
00:20:09.096 03:33:46 -- common/autotest_common.sh@960 -- # wait 320779
00:20:09.354
00:20:09.354 real 0m16.848s
00:20:09.354 user 0m32.955s
00:20:09.354 sys 0m4.312s
00:20:09.354 03:33:46 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:09.354 03:33:46 -- common/autotest_common.sh@10 -- # set +x
00:20:09.354 ************************************
00:20:09.354 END TEST nvmf_digest_error
00:20:09.354 ************************************
00:20:09.354 03:33:46 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:20:09.354 03:33:46 -- host/digest.sh@150 -- # nvmftestfini
00:20:09.354 03:33:46 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:09.354 03:33:46 -- nvmf/common.sh@117 -- # sync
00:20:09.354 03:33:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:09.354 03:33:46 -- nvmf/common.sh@120 -- # set +e
00:20:09.354 03:33:46 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:09.354 03:33:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:09.354 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:20:09.355 03:33:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:09.355 03:33:46 -- nvmf/common.sh@124 -- # set -e
00:20:09.355 03:33:46 -- nvmf/common.sh@125 -- # return 0
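The killprocess trace above for pid 322276 (autotest_common.sh@936 through @960) follows a fixed shape. A simplified reconstruction, as a sketch only, not the verbatim SPDK helper:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # @936: require a pid argument
        kill -0 "$pid" || return 1           # @940: bail out if already gone
        if [ "$(uname)" = Linux ]; then      # @941
            process_name=$(ps --no-headers -o comm= "$pid")   # @942
        fi
        # @946: the real helper special-cases process_name = sudo and
        # targets the child process instead; omitted in this sketch.
        echo "killing process with pid $pid" # @954
        kill "$pid"                          # @955
        wait "$pid"                          # @960: reap it and propagate status
    }

The second invocation below targets pid 320779, which the earlier nvmfcleanup already terminated, hence the "No such process" branch that follows.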
00:20:09.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (320779) - No such process
00:20:09.355 03:33:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 320779 is not found'
Process with pid 320779 is not found
00:20:09.355 03:33:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:09.355 03:33:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:09.355 03:33:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:09.355 03:33:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:09.355 03:33:46 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:09.355 03:33:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:09.355 03:33:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:09.355 03:33:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:11.893 03:33:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:11.894
00:20:11.894 real 0m37.497s
00:20:11.894 user 1m6.479s
00:20:11.894 sys 0m9.759s
00:20:11.894 03:33:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:11.894 03:33:48 -- common/autotest_common.sh@10 -- # set +x
00:20:11.894 ************************************
00:20:11.894 END TEST nvmf_digest
00:20:11.894 ************************************
00:20:11.894 03:33:48 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]]
00:20:11.894 03:33:48 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]]
00:20:11.894 03:33:48 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]]
00:20:11.894 03:33:48 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:20:11.894 03:33:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:11.894 03:33:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:11.894 03:33:48 -- common/autotest_common.sh@10 -- # set +x
00:20:11.894 ************************************
00:20:11.894 START TEST nvmf_bdevperf
00:20:11.894 ************************************
00:20:11.894 03:33:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:20:11.894 * Looking for test storage...
00:20:11.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:20:11.894 03:33:49 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:11.894 03:33:49 -- nvmf/common.sh@7 -- # uname -s
00:20:11.894 03:33:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:11.894 03:33:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:11.894 03:33:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:11.894 03:33:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:11.894 03:33:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:11.894 03:33:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:11.894 03:33:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:11.894 03:33:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:11.894 03:33:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:11.894 03:33:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:11.894 03:33:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:11.894 03:33:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:20:11.894 03:33:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:11.894 03:33:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:11.894 03:33:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:11.894 03:33:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:11.894 03:33:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:11.894 03:33:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:11.894 03:33:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:11.894 03:33:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:11.894 03:33:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:11.894 03:33:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:11.894 03:33:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:11.894 03:33:49 -- paths/export.sh@5 -- # export PATH
00:20:11.894 03:33:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:11.894 03:33:49 -- nvmf/common.sh@47 -- # : 0
00:20:11.894 03:33:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:20:11.894 03:33:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:20:11.894 03:33:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:11.894 03:33:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:11.894 03:33:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:11.894 03:33:49 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:20:11.894 03:33:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:20:11.894 03:33:49 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:20:11.894 03:33:49 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:20:11.894 03:33:49 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:20:11.894 03:33:49 -- host/bdevperf.sh@24 -- # nvmftestinit
00:20:11.894 03:33:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:20:11.894 03:33:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:11.894 03:33:49 -- nvmf/common.sh@437 -- # prepare_net_devs
00:20:11.894 03:33:49 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:20:11.894 03:33:49 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:20:11.894 03:33:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:11.894 03:33:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:11.894 03:33:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:11.894 03:33:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:20:11.894 03:33:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:20:11.894 03:33:49 -- nvmf/common.sh@285 -- # xtrace_disable
00:20:11.894 03:33:49 -- common/autotest_common.sh@10 -- # set +x
00:20:13.798 03:33:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:20:13.798 03:33:51 -- nvmf/common.sh@291 -- # pci_devs=()
00:20:13.798 03:33:51 -- nvmf/common.sh@291 -- # local -a pci_devs
00:20:13.798 03:33:51 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:20:13.798 03:33:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:20:13.798 03:33:51 -- nvmf/common.sh@293 -- # pci_drivers=()
00:20:13.798 03:33:51 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:20:13.798 03:33:51 -- nvmf/common.sh@295 -- # net_devs=()
00:20:13.798 03:33:51 -- nvmf/common.sh@295 -- # local -ga net_devs
00:20:13.798 03:33:51 -- nvmf/common.sh@296 -- # e810=()
00:20:13.798 03:33:51 -- nvmf/common.sh@296 -- # local -ga e810
00:20:13.798 03:33:51 -- nvmf/common.sh@297 -- # x722=()
00:20:13.798 03:33:51 -- nvmf/common.sh@297 -- # local -ga x722
00:20:13.798 03:33:51 -- nvmf/common.sh@298 -- # mlx=()
00:20:13.798 03:33:51 -- nvmf/common.sh@298 -- # local -ga mlx
00:20:13.798 03:33:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:13.798 03:33:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:13.798 03:33:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:13.798 03:33:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:13.798 03:33:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:13.798 03:33:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:13.798 03:33:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:13.798 03:33:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:13.798 03:33:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:13.798 03:33:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:13.798 03:33:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:13.798 03:33:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:20:13.798 03:33:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:20:13.798 03:33:51 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:20:13.798 03:33:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:13.798 03:33:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:20:13.798 03:33:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:13.798 03:33:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
00:20:13.798 03:33:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:20:13.798 03:33:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:13.798 03:33:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:13.798 03:33:51 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:20:13.798 03:33:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:13.798 03:33:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:20:13.798 Found net devices under 0000:0a:00.0: cvl_0_0
00:20:13.798 03:33:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:20:13.798 03:33:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:13.798 03:33:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:13.798 03:33:51 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:20:13.798 03:33:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:13.798 03:33:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
Found net devices under 0000:0a:00.1: cvl_0_1
00:20:13.798 03:33:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:20:13.798 03:33:51 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:20:13.798 03:33:51 -- nvmf/common.sh@403 -- # is_hw=yes
00:20:13.798 03:33:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:20:13.798 03:33:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:20:13.798 03:33:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:13.798 03:33:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:13.798 03:33:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:13.798 03:33:51 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:20:13.798 03:33:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:13.798 03:33:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:13.798 03:33:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:20:13.798 03:33:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:13.798 03:33:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:13.798 03:33:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:20:13.798 03:33:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:20:13.798 03:33:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:20:13.798 03:33:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:13.798 03:33:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:13.798 03:33:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:13.798 03:33:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:13.798 03:33:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:13.798 03:33:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:13.798 03:33:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:13.798 03:33:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:13.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:13.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms
00:20:13.799
00:20:13.799 --- 10.0.0.2 ping statistics ---
00:20:13.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:13.799 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms
00:20:13.799 03:33:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:13.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:13.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms
00:20:13.799
00:20:13.799 --- 10.0.0.1 ping statistics ---
00:20:13.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:13.799 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:20:13.799 03:33:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:13.799 03:33:51 -- nvmf/common.sh@411 -- # return 0
00:20:13.799 03:33:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:20:13.799 03:33:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:13.799 03:33:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:20:13.799 03:33:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:20:13.799 03:33:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:13.799 03:33:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:20:13.799 03:33:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:20:14.057 03:33:51 -- host/bdevperf.sh@25 -- # tgt_init
00:20:14.057 03:33:51 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:20:14.057 03:33:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:20:14.057 03:33:51 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:14.057 03:33:51 -- common/autotest_common.sh@10 -- # set +x
00:20:14.057 03:33:51 -- nvmf/common.sh@470 -- # nvmfpid=324770
00:20:14.057 03:33:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:20:14.057 03:33:51 -- nvmf/common.sh@471 -- # waitforlisten 324770
00:20:14.057 03:33:51 -- common/autotest_common.sh@817 -- # '[' -z 324770 ']'
00:20:14.057 03:33:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:14.057 03:33:51 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:14.057 03:33:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:14.057 03:33:51 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:14.057 03:33:51 -- common/autotest_common.sh@10 -- # set +x
00:20:14.058 [2024-04-19 03:33:51.417154] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization...
00:20:14.058 [2024-04-19 03:33:51.417234] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:14.058 EAL: No free 2048 kB hugepages reported on node 1
00:20:14.058 [2024-04-19 03:33:51.485506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:14.058 [2024-04-19 03:33:51.601529] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:14.058 [2024-04-19 03:33:51.601585] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:14.058 [2024-04-19 03:33:51.601598] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:14.058 [2024-04-19 03:33:51.601609] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:14.058 [2024-04-19 03:33:51.601618] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:14.058 [2024-04-19 03:33:51.601899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:14.058 [2024-04-19 03:33:51.601960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:14.058 [2024-04-19 03:33:51.601963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:14.992 03:33:52 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:14.992 03:33:52 -- common/autotest_common.sh@850 -- # return 0
00:20:14.992 03:33:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:20:14.992 03:33:52 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:14.992 03:33:52 -- common/autotest_common.sh@10 -- # set +x
00:20:14.992 03:33:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:14.992 03:33:52 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:14.992 03:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.992 03:33:52 -- common/autotest_common.sh@10 -- # set +x
00:20:14.992 [2024-04-19 03:33:52.358997] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:14.992 03:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.992 03:33:52 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:14.992 03:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.992 03:33:52 -- common/autotest_common.sh@10 -- # set +x
00:20:14.992 Malloc0
00:20:14.992 03:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.992 03:33:52 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:14.992 03:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.992 03:33:52 -- common/autotest_common.sh@10 -- # set +x
00:20:14.992 03:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.992 03:33:52 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:14.992 03:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.992 03:33:52 -- common/autotest_common.sh@10 -- # set +x
00:20:14.992 03:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.992 03:33:52 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:14.992 03:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.992 03:33:52 -- common/autotest_common.sh@10 -- # set +x
00:20:14.992 [2024-04-19 03:33:52.415838] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:14.992 03:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.992 03:33:52 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:20:14.992 03:33:52 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:20:14.992 03:33:52 -- nvmf/common.sh@521 -- # config=()
00:20:14.992 03:33:52 -- nvmf/common.sh@521 -- # local subsystem config
00:20:14.992 03:33:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:20:14.992 03:33:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:20:14.992 {
00:20:14.992 "params": {
00:20:14.992 "name": "Nvme$subsystem",
00:20:14.992 "trtype": "$TEST_TRANSPORT",
00:20:14.992 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:14.992 "adrfam": "ipv4",
00:20:14.992 "trsvcid": "$NVMF_PORT",
00:20:14.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:14.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:14.992 "hdgst": ${hdgst:-false},
00:20:14.992 "ddgst": ${ddgst:-false}
00:20:14.992 },
00:20:14.992 "method": "bdev_nvme_attach_controller"
00:20:14.992 }
00:20:14.992 EOF
00:20:14.992 )")
00:20:14.992 03:33:52 -- nvmf/common.sh@543 -- # cat
00:20:14.993 03:33:52 -- nvmf/common.sh@545 -- # jq .
00:20:14.993 03:33:52 -- nvmf/common.sh@546 -- # IFS=,
00:20:14.993 03:33:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:20:14.993 "params": {
00:20:14.993 "name": "Nvme1",
00:20:14.993 "trtype": "tcp",
00:20:14.993 "traddr": "10.0.0.2",
00:20:14.993 "adrfam": "ipv4",
00:20:14.993 "trsvcid": "4420",
00:20:14.993 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:14.993 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:14.993 "hdgst": false,
00:20:14.993 "ddgst": false
00:20:14.993 },
00:20:14.993 "method": "bdev_nvme_attach_controller"
00:20:14.993 }'
00:20:14.993 [2024-04-19 03:33:52.459120] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization...
00:20:14.993 [2024-04-19 03:33:52.459215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324925 ]
00:20:14.993 EAL: No free 2048 kB hugepages reported on node 1
00:20:14.993 [2024-04-19 03:33:52.520000] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:15.251 [2024-04-19 03:33:52.631664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:15.509 Running I/O for 1 seconds...
00:20:16.446
00:20:16.446 Latency(us)
00:20:16.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:16.446 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:16.446 Verification LBA range: start 0x0 length 0x4000
00:20:16.446 Nvme1n1 : 1.01 8580.63 33.52 0.00 0.00 14855.87 1286.45 16214.09
00:20:16.446 ===================================================================================================================
00:20:16.446 Total : 8580.63 33.52 0.00 0.00 14855.87 1286.45 16214.09
00:20:16.705 03:33:54 -- host/bdevperf.sh@30 -- # bdevperfpid=325181
00:20:16.705 03:33:54 -- host/bdevperf.sh@32 -- # sleep 3
00:20:16.705 03:33:54 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:20:16.705 03:33:54 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:20:16.705 03:33:54 -- nvmf/common.sh@521 -- # config=()
00:20:16.705 03:33:54 -- nvmf/common.sh@521 -- # local subsystem config
00:20:16.705 03:33:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:20:16.705 03:33:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:20:16.705 {
00:20:16.705 "params": {
00:20:16.705 "name": "Nvme$subsystem",
00:20:16.705 "trtype": "$TEST_TRANSPORT",
00:20:16.705 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:16.705 "adrfam": "ipv4",
00:20:16.705 "trsvcid": "$NVMF_PORT",
00:20:16.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:16.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:16.705 "hdgst": ${hdgst:-false},
00:20:16.705 "ddgst": ${ddgst:-false}
00:20:16.705 },
00:20:16.705 "method": "bdev_nvme_attach_controller"
00:20:16.705 }
00:20:16.705 EOF
00:20:16.705 )")
00:20:16.705 03:33:54 -- nvmf/common.sh@543 -- # cat
00:20:16.705 03:33:54 -- nvmf/common.sh@545 -- # jq .
00:20:16.705 03:33:54 -- nvmf/common.sh@546 -- # IFS=, 00:20:16.705 03:33:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:16.705 "params": { 00:20:16.705 "name": "Nvme1", 00:20:16.705 "trtype": "tcp", 00:20:16.705 "traddr": "10.0.0.2", 00:20:16.705 "adrfam": "ipv4", 00:20:16.705 "trsvcid": "4420", 00:20:16.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.705 "hdgst": false, 00:20:16.705 "ddgst": false 00:20:16.705 }, 00:20:16.705 "method": "bdev_nvme_attach_controller" 00:20:16.705 }' 00:20:16.705 [2024-04-19 03:33:54.161747] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:20:16.705 [2024-04-19 03:33:54.161842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325181 ] 00:20:16.705 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.705 [2024-04-19 03:33:54.223139] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.962 [2024-04-19 03:33:54.331003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.220 Running I/O for 15 seconds... 00:20:19.753 03:33:57 -- host/bdevperf.sh@33 -- # kill -9 324770 00:20:19.753 03:33:57 -- host/bdevperf.sh@35 -- # sleep 3 00:20:19.753 [2024-04-19 03:33:57.134027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.753 [2024-04-19 03:33:57.134074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.753 [2024-04-19 03:33:57.134104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.753 [2024-04-19 03:33:57.134120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.753 [2024-04-19 03:33:57.134137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.753 [2024-04-19 03:33:57.134151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.753 [2024-04-19 03:33:57.134166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.753 [2024-04-19 03:33:57.134180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.753 [2024-04-19 03:33:57.134196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:34760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.753 [2024-04-19 03:33:57.134211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.753 [2024-04-19 03:33:57.134245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.753 [2024-04-19 03:33:57.134261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.753 [2024-04-19 03:33:57.134279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.753 [2024-04-19 03:33:57.134295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.753 [2024-04-19 03:33:57.134320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.753 [2024-04-19 03:33:57.134339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.753 [2024-04-19 03:33:57.134357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.753 [2024-04-19 03:33:57.134375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.753 [2024-04-19 03:33:57.134402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.753 [2024-04-19 03:33:57.134433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.753 [2024-04-19 03:33:57.134450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.753 [2024-04-19 03:33:57.134464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.753 [2024-04-19 03:33:57.134480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.753 [2024-04-19 03:33:57.134493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.753 [2024-04-19 03:33:57.134508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.753 [2024-04-19 03:33:57.134521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.753 [2024-04-19 03:33:57.134536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:45 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34936 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.134976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.134993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:19.754 [2024-04-19 03:33:57.135306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.754 [2024-04-19 03:33:57.135491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.754 [2024-04-19 03:33:57.135521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.754 [2024-04-19 03:33:57.135552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.754 [2024-04-19 03:33:57.135582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.754 [2024-04-19 03:33:57.135616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.754 [2024-04-19 03:33:57.135646] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.754 [2024-04-19 03:33:57.135695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.754 [2024-04-19 03:33:57.135712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.135727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.135744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.135762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.135780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.135797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.135815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.135831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.135848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.135864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.135881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.135897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.135914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.135929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.135946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.135962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.135979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.135995] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.136012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.136032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.136049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.136067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.136085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.136101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.136118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.136134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.136151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.136167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.136184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.136199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.136216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.136231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.136248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.136264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.136281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.136297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.755 [2024-04-19 03:33:57.136313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.755 [2024-04-19 03:33:57.136329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:19.755 [2024-04-19 03:33:57.136345 .. 03:33:57.138144] nvme_qpair.c: 243/474: [54 repeated pairs, identical apart from cid and lba: *NOTICE*: WRITE sqid:1 nsid:1 lba:35320..35744 (step 8) len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:19.756 [2024-04-19 03:33:57.138166 .. 03:33:57.138343] nvme_qpair.c: 243/474: [6 repeated pairs: *NOTICE*: READ sqid:1 nsid:1 lba:35064..35104 (step 8) len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:19.756 [2024-04-19 03:33:57.138359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bd100 is same with the state(5) to be set
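Every completion above carries the status ABORTED - SQ DELETION (00/08): status code type 0x0 (generic), status code 0x08, meaning the submission queue backing these commands (here the TCP qpair) was deleted while the WRITE/READ commands were still outstanding, so the driver fails them without the target ever executing them. Below is a minimal sketch (not part of the log) of how an SPDK I/O completion callback can classify such completions; the callback name io_complete_cb and the resubmit policy are illustrative assumptions:

    /* Sketch: classify the ABORTED - SQ DELETION (00/08) completions above. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* Generic status 0x08: the qpair backing this command was
                     * torn down before the command completed. The command never
                     * executed, so it can be resubmitted after the reset. */
                    printf("I/O aborted by SQ deletion; resubmit after reset\n");
                    return;
            }
            /* ... handle success and other error statuses ... */
    }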
00:20:19.756 [2024-04-19 03:33:57.138377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:19.756 [2024-04-19 03:33:57.138398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:19.757 [2024-04-19 03:33:57.138412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35112 len:8 PRP1 0x0 PRP2 0x0
00:20:19.757 [2024-04-19 03:33:57.138446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:19.757 [2024-04-19 03:33:57.138510] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15bd100 was disconnected and freed. reset controller.
00:20:19.757 [2024-04-19 03:33:57.142119] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.757 [2024-04-19 03:33:57.142192] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:19.757 [2024-04-19 03:33:57.142940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.757 [2024-04-19 03:33:57.143153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.757 [2024-04-19 03:33:57.143181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:19.757 [2024-04-19 03:33:57.143198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:19.757 [2024-04-19 03:33:57.143608] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:19.757 [2024-04-19 03:33:57.143874] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.757 [2024-04-19 03:33:57.143899] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.757 [2024-04-19 03:33:57.143923] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.757 [2024-04-19 03:33:57.147475] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
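errno = 111 is ECONNREFUSED on Linux: the target at 10.0.0.2:4420 (the NVMe/TCP well-known port) is reachable but has no listener while the test holds it down, so each reconnect dies at the plain TCP layer before any NVMe-oF handshake starts. A self-contained sketch (not from the log; it assumes the same no-listener situation on a reachable host) reproduces the identical errno with nothing but POSIX sockets:

    /* Sketch: reproduce "connect() failed, errno = 111" with no listener. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in sa = {
                    .sin_family = AF_INET,
                    .sin_port = htons(4420),    /* NVMe/TCP well-known port */
            };

            inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
                    /* With the port closed this prints errno = 111. */
                    printf("connect() failed, errno = %d (%s)\n",
                           errno, strerror(errno));
            }
            close(fd);
            return 0;
    }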
00:20:19.757 .. 00:20:20.306 [2024-04-19 03:33:57.156273 .. 03:33:57.593171] [32 further reset/reconnect attempts follow, identical apart from timestamps and spaced roughly 14 ms apart; each iteration logs:
  nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
  posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111   (twice)
  nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
  nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
  nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
  nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
  nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
  nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
  bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.]
00:20:20.306 [2024-04-19 03:33:57.602390] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.306 [2024-04-19 03:33:57.602881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.603057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.603085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.306 [2024-04-19 03:33:57.603103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.306 [2024-04-19 03:33:57.603340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.306 [2024-04-19 03:33:57.603590] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.306 [2024-04-19 03:33:57.603616] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.306 [2024-04-19 03:33:57.603632] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.306 [2024-04-19 03:33:57.607176] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.306 [2024-04-19 03:33:57.616393] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.306 [2024-04-19 03:33:57.616827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.616996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.617024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.306 [2024-04-19 03:33:57.617041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.306 [2024-04-19 03:33:57.617277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.306 [2024-04-19 03:33:57.617530] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.306 [2024-04-19 03:33:57.617555] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.306 [2024-04-19 03:33:57.617571] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.306 [2024-04-19 03:33:57.621119] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.306 [2024-04-19 03:33:57.630342] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.306 [2024-04-19 03:33:57.630760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.630911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.630939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.306 [2024-04-19 03:33:57.630956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.306 [2024-04-19 03:33:57.631193] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.306 [2024-04-19 03:33:57.631446] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.306 [2024-04-19 03:33:57.631470] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.306 [2024-04-19 03:33:57.631486] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.306 [2024-04-19 03:33:57.635053] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.306 [2024-04-19 03:33:57.644267] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.306 [2024-04-19 03:33:57.644705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.644952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.645016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.306 [2024-04-19 03:33:57.645034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.306 [2024-04-19 03:33:57.645271] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.306 [2024-04-19 03:33:57.645524] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.306 [2024-04-19 03:33:57.645549] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.306 [2024-04-19 03:33:57.645564] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.306 [2024-04-19 03:33:57.649108] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.306 [2024-04-19 03:33:57.658105] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.306 [2024-04-19 03:33:57.658547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.658745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.658793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.306 [2024-04-19 03:33:57.658811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.306 [2024-04-19 03:33:57.659047] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.306 [2024-04-19 03:33:57.659290] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.306 [2024-04-19 03:33:57.659313] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.306 [2024-04-19 03:33:57.659328] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.306 [2024-04-19 03:33:57.662895] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.306 [2024-04-19 03:33:57.672104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.306 [2024-04-19 03:33:57.672530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.672712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.672740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.306 [2024-04-19 03:33:57.672757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.306 [2024-04-19 03:33:57.672995] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.306 [2024-04-19 03:33:57.673236] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.306 [2024-04-19 03:33:57.673260] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.306 [2024-04-19 03:33:57.673276] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.306 [2024-04-19 03:33:57.676835] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.306 [2024-04-19 03:33:57.686045] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.306 [2024-04-19 03:33:57.686477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.686655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.686683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.306 [2024-04-19 03:33:57.686700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.306 [2024-04-19 03:33:57.686937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.306 [2024-04-19 03:33:57.687179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.306 [2024-04-19 03:33:57.687203] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.306 [2024-04-19 03:33:57.687219] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.306 [2024-04-19 03:33:57.690775] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.306 [2024-04-19 03:33:57.699977] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.306 [2024-04-19 03:33:57.700415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.700594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.700622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.306 [2024-04-19 03:33:57.700639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.306 [2024-04-19 03:33:57.700877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.306 [2024-04-19 03:33:57.701118] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.306 [2024-04-19 03:33:57.701142] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.306 [2024-04-19 03:33:57.701157] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.306 [2024-04-19 03:33:57.704715] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.306 [2024-04-19 03:33:57.713913] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.306 [2024-04-19 03:33:57.714361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.714542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.306 [2024-04-19 03:33:57.714576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.306 [2024-04-19 03:33:57.714594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.306 [2024-04-19 03:33:57.714831] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.306 [2024-04-19 03:33:57.715073] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.307 [2024-04-19 03:33:57.715097] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.307 [2024-04-19 03:33:57.715112] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.307 [2024-04-19 03:33:57.718681] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.307 [2024-04-19 03:33:57.727887] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.307 [2024-04-19 03:33:57.728316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.728522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.728551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.307 [2024-04-19 03:33:57.728569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.307 [2024-04-19 03:33:57.728806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.307 [2024-04-19 03:33:57.729048] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.307 [2024-04-19 03:33:57.729072] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.307 [2024-04-19 03:33:57.729088] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.307 [2024-04-19 03:33:57.732644] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.307 [2024-04-19 03:33:57.741852] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.307 [2024-04-19 03:33:57.742362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.742584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.742630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.307 [2024-04-19 03:33:57.742647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.307 [2024-04-19 03:33:57.742885] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.307 [2024-04-19 03:33:57.743126] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.307 [2024-04-19 03:33:57.743150] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.307 [2024-04-19 03:33:57.743166] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.307 [2024-04-19 03:33:57.746721] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.307 [2024-04-19 03:33:57.755781] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.307 [2024-04-19 03:33:57.756193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.756408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.756438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.307 [2024-04-19 03:33:57.756462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.307 [2024-04-19 03:33:57.756702] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.307 [2024-04-19 03:33:57.756944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.307 [2024-04-19 03:33:57.756967] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.307 [2024-04-19 03:33:57.756983] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.307 [2024-04-19 03:33:57.760539] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.307 [2024-04-19 03:33:57.769768] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.307 [2024-04-19 03:33:57.770203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.770360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.770401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.307 [2024-04-19 03:33:57.770431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.307 [2024-04-19 03:33:57.770669] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.307 [2024-04-19 03:33:57.770910] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.307 [2024-04-19 03:33:57.770934] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.307 [2024-04-19 03:33:57.770950] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.307 [2024-04-19 03:33:57.774506] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.307 [2024-04-19 03:33:57.783721] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.307 [2024-04-19 03:33:57.784127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.784303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.784331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.307 [2024-04-19 03:33:57.784349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.307 [2024-04-19 03:33:57.784596] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.307 [2024-04-19 03:33:57.784838] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.307 [2024-04-19 03:33:57.784862] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.307 [2024-04-19 03:33:57.784877] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.307 [2024-04-19 03:33:57.788435] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.307 [2024-04-19 03:33:57.797655] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.307 [2024-04-19 03:33:57.798060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.798293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.798338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.307 [2024-04-19 03:33:57.798355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.307 [2024-04-19 03:33:57.798609] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.307 [2024-04-19 03:33:57.798852] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.307 [2024-04-19 03:33:57.798875] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.307 [2024-04-19 03:33:57.798891] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.307 [2024-04-19 03:33:57.802446] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.307 [2024-04-19 03:33:57.811659] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.307 [2024-04-19 03:33:57.812089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.812281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.812309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.307 [2024-04-19 03:33:57.812326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.307 [2024-04-19 03:33:57.812575] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.307 [2024-04-19 03:33:57.812818] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.307 [2024-04-19 03:33:57.812842] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.307 [2024-04-19 03:33:57.812857] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.307 [2024-04-19 03:33:57.816414] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.307 [2024-04-19 03:33:57.825635] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.307 [2024-04-19 03:33:57.826065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.826265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.826311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.307 [2024-04-19 03:33:57.826329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.307 [2024-04-19 03:33:57.826578] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.307 [2024-04-19 03:33:57.826820] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.307 [2024-04-19 03:33:57.826844] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.307 [2024-04-19 03:33:57.826860] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.307 [2024-04-19 03:33:57.830416] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.307 [2024-04-19 03:33:57.839639] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.307 [2024-04-19 03:33:57.840047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.840310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.307 [2024-04-19 03:33:57.840362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.307 [2024-04-19 03:33:57.840379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.307 [2024-04-19 03:33:57.840629] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.307 [2024-04-19 03:33:57.840877] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.307 [2024-04-19 03:33:57.840902] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.307 [2024-04-19 03:33:57.840917] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.307 [2024-04-19 03:33:57.844472] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.568 [2024-04-19 03:33:57.853476] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.568 [2024-04-19 03:33:57.853881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.854079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.854124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.568 [2024-04-19 03:33:57.854142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.568 [2024-04-19 03:33:57.854379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.568 [2024-04-19 03:33:57.854633] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.568 [2024-04-19 03:33:57.854658] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.568 [2024-04-19 03:33:57.854673] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.568 [2024-04-19 03:33:57.858217] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.568 [2024-04-19 03:33:57.867447] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.568 [2024-04-19 03:33:57.867992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.868281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.868309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.568 [2024-04-19 03:33:57.868326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.568 [2024-04-19 03:33:57.868574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.568 [2024-04-19 03:33:57.868817] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.568 [2024-04-19 03:33:57.868840] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.568 [2024-04-19 03:33:57.868856] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.568 [2024-04-19 03:33:57.872421] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.568 [2024-04-19 03:33:57.881423] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.568 [2024-04-19 03:33:57.881835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.882010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.882038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.568 [2024-04-19 03:33:57.882056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.568 [2024-04-19 03:33:57.882292] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.568 [2024-04-19 03:33:57.882546] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.568 [2024-04-19 03:33:57.882578] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.568 [2024-04-19 03:33:57.882594] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.568 [2024-04-19 03:33:57.886140] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.568 [2024-04-19 03:33:57.895369] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.568 [2024-04-19 03:33:57.895790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.896061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.896089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.568 [2024-04-19 03:33:57.896107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.568 [2024-04-19 03:33:57.896343] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.568 [2024-04-19 03:33:57.896595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.568 [2024-04-19 03:33:57.896620] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.568 [2024-04-19 03:33:57.896635] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.568 [2024-04-19 03:33:57.900183] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.568 [2024-04-19 03:33:57.909193] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.568 [2024-04-19 03:33:57.909634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.909890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.909919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.568 [2024-04-19 03:33:57.909936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.568 [2024-04-19 03:33:57.910173] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.568 [2024-04-19 03:33:57.910428] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.568 [2024-04-19 03:33:57.910453] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.568 [2024-04-19 03:33:57.910469] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.568 [2024-04-19 03:33:57.914016] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.568 [2024-04-19 03:33:57.923023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.568 [2024-04-19 03:33:57.923481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.923660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.923689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.568 [2024-04-19 03:33:57.923706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.568 [2024-04-19 03:33:57.923944] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.568 [2024-04-19 03:33:57.924185] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.568 [2024-04-19 03:33:57.924209] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.568 [2024-04-19 03:33:57.924233] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.568 [2024-04-19 03:33:57.927793] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.568 [2024-04-19 03:33:57.936998] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.568 [2024-04-19 03:33:57.937430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.937631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.937659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.568 [2024-04-19 03:33:57.937677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.568 [2024-04-19 03:33:57.937914] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.568 [2024-04-19 03:33:57.938156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.568 [2024-04-19 03:33:57.938180] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.568 [2024-04-19 03:33:57.938195] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.568 [2024-04-19 03:33:57.941754] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.568 [2024-04-19 03:33:57.950964] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.568 [2024-04-19 03:33:57.951395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.951606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.568 [2024-04-19 03:33:57.951634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.568 [2024-04-19 03:33:57.951652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.568 [2024-04-19 03:33:57.951889] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.568 [2024-04-19 03:33:57.952130] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.568 [2024-04-19 03:33:57.952154] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.568 [2024-04-19 03:33:57.952170] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.568 [2024-04-19 03:33:57.955730] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.569 [2024-04-19 03:33:57.964780] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.569 [2024-04-19 03:33:57.965187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:57.965363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:57.965401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.569 [2024-04-19 03:33:57.965421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.569 [2024-04-19 03:33:57.965658] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.569 [2024-04-19 03:33:57.965900] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.569 [2024-04-19 03:33:57.965925] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.569 [2024-04-19 03:33:57.965940] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.569 [2024-04-19 03:33:57.969504] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.569 [2024-04-19 03:33:57.978727] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.569 [2024-04-19 03:33:57.979161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:57.979361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:57.979398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.569 [2024-04-19 03:33:57.979418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.569 [2024-04-19 03:33:57.979655] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.569 [2024-04-19 03:33:57.979898] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.569 [2024-04-19 03:33:57.979922] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.569 [2024-04-19 03:33:57.979937] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.569 [2024-04-19 03:33:57.983492] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.569 [2024-04-19 03:33:57.992706] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.569 [2024-04-19 03:33:57.993118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:57.993297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:57.993325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.569 [2024-04-19 03:33:57.993343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.569 [2024-04-19 03:33:57.993589] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.569 [2024-04-19 03:33:57.993832] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.569 [2024-04-19 03:33:57.993856] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.569 [2024-04-19 03:33:57.993872] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.569 [2024-04-19 03:33:57.997428] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.569 [2024-04-19 03:33:58.006680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.569 [2024-04-19 03:33:58.007115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:58.007371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:58.007409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.569 [2024-04-19 03:33:58.007428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.569 [2024-04-19 03:33:58.007666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.569 [2024-04-19 03:33:58.007909] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.569 [2024-04-19 03:33:58.007932] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.569 [2024-04-19 03:33:58.007948] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.569 [2024-04-19 03:33:58.011507] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.569 [2024-04-19 03:33:58.020519] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.569 [2024-04-19 03:33:58.020950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:58.021198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:58.021226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.569 [2024-04-19 03:33:58.021244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.569 [2024-04-19 03:33:58.021492] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.569 [2024-04-19 03:33:58.021735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.569 [2024-04-19 03:33:58.021759] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.569 [2024-04-19 03:33:58.021775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.569 [2024-04-19 03:33:58.025323] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.569 [2024-04-19 03:33:58.034324] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.569 [2024-04-19 03:33:58.034739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:58.034980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:58.035037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.569 [2024-04-19 03:33:58.035054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.569 [2024-04-19 03:33:58.035291] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.569 [2024-04-19 03:33:58.035545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.569 [2024-04-19 03:33:58.035569] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.569 [2024-04-19 03:33:58.035585] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.569 [2024-04-19 03:33:58.039132] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.569 [2024-04-19 03:33:58.048130] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.569 [2024-04-19 03:33:58.048549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:58.048724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:58.048752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.569 [2024-04-19 03:33:58.048770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.569 [2024-04-19 03:33:58.049006] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.569 [2024-04-19 03:33:58.049248] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.569 [2024-04-19 03:33:58.049272] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.569 [2024-04-19 03:33:58.049287] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.569 [2024-04-19 03:33:58.052849] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.569 [2024-04-19 03:33:58.062067] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.569 [2024-04-19 03:33:58.062507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:58.062708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:58.062736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.569 [2024-04-19 03:33:58.062754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.569 [2024-04-19 03:33:58.062992] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.569 [2024-04-19 03:33:58.063234] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.569 [2024-04-19 03:33:58.063258] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.569 [2024-04-19 03:33:58.063273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.569 [2024-04-19 03:33:58.066833] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.569 [2024-04-19 03:33:58.076053] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.569 [2024-04-19 03:33:58.076487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:58.076680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.569 [2024-04-19 03:33:58.076708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.569 [2024-04-19 03:33:58.076726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.569 [2024-04-19 03:33:58.076963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.569 [2024-04-19 03:33:58.077206] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.569 [2024-04-19 03:33:58.077229] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.569 [2024-04-19 03:33:58.077245] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.569 [2024-04-19 03:33:58.080802] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.569 [2024-04-19 03:33:58.090012] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.569 [2024-04-19 03:33:58.090424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.570 [2024-04-19 03:33:58.090594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.570 [2024-04-19 03:33:58.090622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.570 [2024-04-19 03:33:58.090639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.570 [2024-04-19 03:33:58.090876] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.570 [2024-04-19 03:33:58.091118] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.570 [2024-04-19 03:33:58.091142] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.570 [2024-04-19 03:33:58.091157] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.570 [2024-04-19 03:33:58.094716] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.570 [2024-04-19 03:33:58.103928] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.570 [2024-04-19 03:33:58.104357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.570 [2024-04-19 03:33:58.104533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.570 [2024-04-19 03:33:58.104567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.570 [2024-04-19 03:33:58.104586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.570 [2024-04-19 03:33:58.104823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.570 [2024-04-19 03:33:58.105065] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.570 [2024-04-19 03:33:58.105089] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.570 [2024-04-19 03:33:58.105104] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.570 [2024-04-19 03:33:58.108667] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.570 [2024-04-19 03:33:58.117883] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.570 [2024-04-19 03:33:58.118291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.570 [2024-04-19 03:33:58.118499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.570 [2024-04-19 03:33:58.118529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.570 [2024-04-19 03:33:58.118547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.570 [2024-04-19 03:33:58.118784] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.570 [2024-04-19 03:33:58.119026] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.570 [2024-04-19 03:33:58.119050] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.570 [2024-04-19 03:33:58.119065] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.570 [2024-04-19 03:33:58.122625] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.830 [2024-04-19 03:33:58.131869] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.830 [2024-04-19 03:33:58.132302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.132485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.132514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.830 [2024-04-19 03:33:58.132532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.830 [2024-04-19 03:33:58.132769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.830 [2024-04-19 03:33:58.133012] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.830 [2024-04-19 03:33:58.133036] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.830 [2024-04-19 03:33:58.133051] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.830 [2024-04-19 03:33:58.136607] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.830 [2024-04-19 03:33:58.145832] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.830 [2024-04-19 03:33:58.146241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.146444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.146474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.830 [2024-04-19 03:33:58.146498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.830 [2024-04-19 03:33:58.146738] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.830 [2024-04-19 03:33:58.146982] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.830 [2024-04-19 03:33:58.147006] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.830 [2024-04-19 03:33:58.147021] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.830 [2024-04-19 03:33:58.150584] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.830 [2024-04-19 03:33:58.159803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.830 [2024-04-19 03:33:58.160239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.160415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.160444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.830 [2024-04-19 03:33:58.160462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.830 [2024-04-19 03:33:58.160700] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.830 [2024-04-19 03:33:58.160943] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.830 [2024-04-19 03:33:58.160968] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.830 [2024-04-19 03:33:58.160983] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.830 [2024-04-19 03:33:58.164543] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.830 [2024-04-19 03:33:58.173982] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.830 [2024-04-19 03:33:58.174427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.174666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.174696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.830 [2024-04-19 03:33:58.174714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.830 [2024-04-19 03:33:58.174953] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.830 [2024-04-19 03:33:58.175197] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.830 [2024-04-19 03:33:58.175222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.830 [2024-04-19 03:33:58.175237] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.830 [2024-04-19 03:33:58.178797] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.830 [2024-04-19 03:33:58.187810] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.830 [2024-04-19 03:33:58.188371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.188658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.188717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.830 [2024-04-19 03:33:58.188736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.830 [2024-04-19 03:33:58.188980] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.830 [2024-04-19 03:33:58.189223] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.830 [2024-04-19 03:33:58.189248] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.830 [2024-04-19 03:33:58.189263] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.830 [2024-04-19 03:33:58.192849] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.830 [2024-04-19 03:33:58.201652] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.830 [2024-04-19 03:33:58.202096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.202268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.202297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.830 [2024-04-19 03:33:58.202315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.830 [2024-04-19 03:33:58.202565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.830 [2024-04-19 03:33:58.202808] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.830 [2024-04-19 03:33:58.202833] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.830 [2024-04-19 03:33:58.202850] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.830 [2024-04-19 03:33:58.206406] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.830 [2024-04-19 03:33:58.215626] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.830 [2024-04-19 03:33:58.216060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.216252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.216280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.830 [2024-04-19 03:33:58.216299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.830 [2024-04-19 03:33:58.216550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.830 [2024-04-19 03:33:58.216795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.830 [2024-04-19 03:33:58.216821] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.830 [2024-04-19 03:33:58.216838] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.830 [2024-04-19 03:33:58.220396] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.830 [2024-04-19 03:33:58.229611] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.830 [2024-04-19 03:33:58.230040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.230184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.830 [2024-04-19 03:33:58.230212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.830 [2024-04-19 03:33:58.230230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.830 [2024-04-19 03:33:58.230483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.830 [2024-04-19 03:33:58.230732] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.831 [2024-04-19 03:33:58.230758] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.831 [2024-04-19 03:33:58.230775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.831 [2024-04-19 03:33:58.234324] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.831 [2024-04-19 03:33:58.243546] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.831 [2024-04-19 03:33:58.243985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.244187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.244215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.831 [2024-04-19 03:33:58.244233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.831 [2024-04-19 03:33:58.244484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.831 [2024-04-19 03:33:58.244727] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.831 [2024-04-19 03:33:58.244752] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.831 [2024-04-19 03:33:58.244769] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.831 [2024-04-19 03:33:58.248317] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.831 [2024-04-19 03:33:58.257543] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.831 [2024-04-19 03:33:58.257949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.258263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.258314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.831 [2024-04-19 03:33:58.258332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.831 [2024-04-19 03:33:58.258580] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.831 [2024-04-19 03:33:58.258823] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.831 [2024-04-19 03:33:58.258849] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.831 [2024-04-19 03:33:58.258865] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.831 [2024-04-19 03:33:58.262422] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.831 [2024-04-19 03:33:58.271437] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.831 [2024-04-19 03:33:58.271861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.272111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.272162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.831 [2024-04-19 03:33:58.272182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.831 [2024-04-19 03:33:58.272443] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.831 [2024-04-19 03:33:58.272689] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.831 [2024-04-19 03:33:58.272720] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.831 [2024-04-19 03:33:58.272738] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.831 [2024-04-19 03:33:58.276289] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.831 [2024-04-19 03:33:58.285300] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.831 [2024-04-19 03:33:58.285751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.285958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.285989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.831 [2024-04-19 03:33:58.286008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.831 [2024-04-19 03:33:58.286247] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.831 [2024-04-19 03:33:58.286507] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.831 [2024-04-19 03:33:58.286534] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.831 [2024-04-19 03:33:58.286550] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.831 [2024-04-19 03:33:58.290104] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.831 [2024-04-19 03:33:58.299110] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.831 [2024-04-19 03:33:58.299527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.299710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.299738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.831 [2024-04-19 03:33:58.299756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.831 [2024-04-19 03:33:58.299996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.831 [2024-04-19 03:33:58.300240] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.831 [2024-04-19 03:33:58.300265] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.831 [2024-04-19 03:33:58.300281] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.831 [2024-04-19 03:33:58.303844] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.831 [2024-04-19 03:33:58.313057] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.831 [2024-04-19 03:33:58.313491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.313745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.313799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.831 [2024-04-19 03:33:58.313818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.831 [2024-04-19 03:33:58.314056] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.831 [2024-04-19 03:33:58.314301] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.831 [2024-04-19 03:33:58.314327] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.831 [2024-04-19 03:33:58.314349] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.831 [2024-04-19 03:33:58.317915] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.831 [2024-04-19 03:33:58.326917] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.831 [2024-04-19 03:33:58.327350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.327561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.327591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.831 [2024-04-19 03:33:58.327610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.831 [2024-04-19 03:33:58.327849] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.831 [2024-04-19 03:33:58.328093] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.831 [2024-04-19 03:33:58.328119] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.831 [2024-04-19 03:33:58.328135] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.831 [2024-04-19 03:33:58.331695] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.831 [2024-04-19 03:33:58.340911] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.831 [2024-04-19 03:33:58.341398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.341677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.341731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.831 [2024-04-19 03:33:58.341749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.831 [2024-04-19 03:33:58.341989] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.831 [2024-04-19 03:33:58.342233] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.831 [2024-04-19 03:33:58.342259] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.831 [2024-04-19 03:33:58.342275] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.831 [2024-04-19 03:33:58.345837] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.831 [2024-04-19 03:33:58.354847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.831 [2024-04-19 03:33:58.355257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.355445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.831 [2024-04-19 03:33:58.355476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.831 [2024-04-19 03:33:58.355495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.831 [2024-04-19 03:33:58.355733] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.831 [2024-04-19 03:33:58.355977] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.831 [2024-04-19 03:33:58.356003] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.831 [2024-04-19 03:33:58.356020] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.832 [2024-04-19 03:33:58.359586] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.832 [2024-04-19 03:33:58.368805] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.832 [2024-04-19 03:33:58.369255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.832 [2024-04-19 03:33:58.369433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.832 [2024-04-19 03:33:58.369463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.832 [2024-04-19 03:33:58.369482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.832 [2024-04-19 03:33:58.369720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.832 [2024-04-19 03:33:58.369962] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.832 [2024-04-19 03:33:58.369987] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.832 [2024-04-19 03:33:58.370004] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.832 [2024-04-19 03:33:58.373571] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.832 [2024-04-19 03:33:58.382830] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.832 [2024-04-19 03:33:58.383268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.832 [2024-04-19 03:33:58.383457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.832 [2024-04-19 03:33:58.383488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:20.832 [2024-04-19 03:33:58.383507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:20.832 [2024-04-19 03:33:58.383746] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:20.832 [2024-04-19 03:33:58.383990] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.832 [2024-04-19 03:33:58.384016] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.832 [2024-04-19 03:33:58.384033] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.091 [2024-04-19 03:33:58.387595] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.092 [2024-04-19 03:33:58.396824] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.092 [2024-04-19 03:33:58.397307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.397479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.397510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.092 [2024-04-19 03:33:58.397528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.092 [2024-04-19 03:33:58.397767] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.092 [2024-04-19 03:33:58.398011] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.092 [2024-04-19 03:33:58.398036] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.092 [2024-04-19 03:33:58.398053] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.092 [2024-04-19 03:33:58.401609] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.092 [2024-04-19 03:33:58.410830] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.092 [2024-04-19 03:33:58.411263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.411440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.411470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.092 [2024-04-19 03:33:58.411488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.092 [2024-04-19 03:33:58.411726] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.092 [2024-04-19 03:33:58.411970] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.092 [2024-04-19 03:33:58.411996] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.092 [2024-04-19 03:33:58.412012] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.092 [2024-04-19 03:33:58.415573] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.092 [2024-04-19 03:33:58.424792] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.092 [2024-04-19 03:33:58.425197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.425362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.425403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.092 [2024-04-19 03:33:58.425425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.092 [2024-04-19 03:33:58.425663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.092 [2024-04-19 03:33:58.425907] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.092 [2024-04-19 03:33:58.425933] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.092 [2024-04-19 03:33:58.425950] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.092 [2024-04-19 03:33:58.429507] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.092 [2024-04-19 03:33:58.438726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.092 [2024-04-19 03:33:58.439162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.439345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.439374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.092 [2024-04-19 03:33:58.439403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.092 [2024-04-19 03:33:58.439642] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.092 [2024-04-19 03:33:58.439886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.092 [2024-04-19 03:33:58.439912] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.092 [2024-04-19 03:33:58.439928] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.092 [2024-04-19 03:33:58.443496] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.092 [2024-04-19 03:33:58.452728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.092 [2024-04-19 03:33:58.453183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.453340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.453370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.092 [2024-04-19 03:33:58.453400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.092 [2024-04-19 03:33:58.453640] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.092 [2024-04-19 03:33:58.453890] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.092 [2024-04-19 03:33:58.453916] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.092 [2024-04-19 03:33:58.453932] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.092 [2024-04-19 03:33:58.457494] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.092 [2024-04-19 03:33:58.466722] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.092 [2024-04-19 03:33:58.467287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.467523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.467553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.092 [2024-04-19 03:33:58.467572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.092 [2024-04-19 03:33:58.467809] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.092 [2024-04-19 03:33:58.468053] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.092 [2024-04-19 03:33:58.468079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.092 [2024-04-19 03:33:58.468095] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.092 [2024-04-19 03:33:58.471681] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.092 [2024-04-19 03:33:58.480695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.092 [2024-04-19 03:33:58.481213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.481397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.481435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.092 [2024-04-19 03:33:58.481453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.092 [2024-04-19 03:33:58.481691] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.092 [2024-04-19 03:33:58.481935] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.092 [2024-04-19 03:33:58.481961] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.092 [2024-04-19 03:33:58.481978] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.092 [2024-04-19 03:33:58.485543] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.092 [2024-04-19 03:33:58.494559] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.092 [2024-04-19 03:33:58.495125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.495363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.495407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.092 [2024-04-19 03:33:58.495428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.092 [2024-04-19 03:33:58.495667] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.092 [2024-04-19 03:33:58.495911] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.092 [2024-04-19 03:33:58.495936] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.092 [2024-04-19 03:33:58.495952] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.092 [2024-04-19 03:33:58.499510] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.092 [2024-04-19 03:33:58.508532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.092 [2024-04-19 03:33:58.508977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.509189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.509219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.092 [2024-04-19 03:33:58.509238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.092 [2024-04-19 03:33:58.509489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.092 [2024-04-19 03:33:58.509734] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.092 [2024-04-19 03:33:58.509760] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.092 [2024-04-19 03:33:58.509777] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.092 [2024-04-19 03:33:58.513324] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.092 [2024-04-19 03:33:58.522339] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.092 [2024-04-19 03:33:58.522791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.522988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.092 [2024-04-19 03:33:58.523034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.092 [2024-04-19 03:33:58.523053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.093 [2024-04-19 03:33:58.523291] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.093 [2024-04-19 03:33:58.523545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.093 [2024-04-19 03:33:58.523571] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.093 [2024-04-19 03:33:58.523588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.093 [2024-04-19 03:33:58.527136] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.093 [2024-04-19 03:33:58.536143] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.093 [2024-04-19 03:33:58.536540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.536721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.536750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.093 [2024-04-19 03:33:58.536773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.093 [2024-04-19 03:33:58.537013] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.093 [2024-04-19 03:33:58.537257] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.093 [2024-04-19 03:33:58.537282] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.093 [2024-04-19 03:33:58.537298] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.093 [2024-04-19 03:33:58.540859] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.093 [2024-04-19 03:33:58.550099] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.093 [2024-04-19 03:33:58.550494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.550677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.550705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.093 [2024-04-19 03:33:58.550723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.093 [2024-04-19 03:33:58.550962] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.093 [2024-04-19 03:33:58.551207] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.093 [2024-04-19 03:33:58.551233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.093 [2024-04-19 03:33:58.551250] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.093 [2024-04-19 03:33:58.554807] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.093 [2024-04-19 03:33:58.564015] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.093 [2024-04-19 03:33:58.564449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.564603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.564634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.093 [2024-04-19 03:33:58.564652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.093 [2024-04-19 03:33:58.564891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.093 [2024-04-19 03:33:58.565135] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.093 [2024-04-19 03:33:58.565161] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.093 [2024-04-19 03:33:58.565178] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.093 [2024-04-19 03:33:58.568741] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.093 [2024-04-19 03:33:58.578017] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.093 [2024-04-19 03:33:58.578431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.578583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.578613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.093 [2024-04-19 03:33:58.578631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.093 [2024-04-19 03:33:58.578880] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.093 [2024-04-19 03:33:58.579124] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.093 [2024-04-19 03:33:58.579149] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.093 [2024-04-19 03:33:58.579166] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.093 [2024-04-19 03:33:58.582735] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.093 [2024-04-19 03:33:58.592023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.093 [2024-04-19 03:33:58.592440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.592621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.592651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.093 [2024-04-19 03:33:58.592669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.093 [2024-04-19 03:33:58.592908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.093 [2024-04-19 03:33:58.593152] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.093 [2024-04-19 03:33:58.593177] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.093 [2024-04-19 03:33:58.593194] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.093 [2024-04-19 03:33:58.596767] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.093 [2024-04-19 03:33:58.606007] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.093 [2024-04-19 03:33:58.606511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.606697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.606726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.093 [2024-04-19 03:33:58.606745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.093 [2024-04-19 03:33:58.606983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.093 [2024-04-19 03:33:58.607226] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.093 [2024-04-19 03:33:58.607251] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.093 [2024-04-19 03:33:58.607267] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.093 [2024-04-19 03:33:58.610831] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.093 [2024-04-19 03:33:58.619869] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.093 [2024-04-19 03:33:58.620300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.620511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.620541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.093 [2024-04-19 03:33:58.620560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.093 [2024-04-19 03:33:58.620797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.093 [2024-04-19 03:33:58.621047] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.093 [2024-04-19 03:33:58.621072] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.093 [2024-04-19 03:33:58.621089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.093 [2024-04-19 03:33:58.624659] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.093 [2024-04-19 03:33:58.633919] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.093 [2024-04-19 03:33:58.634356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.634564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.093 [2024-04-19 03:33:58.634594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.093 [2024-04-19 03:33:58.634612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.093 [2024-04-19 03:33:58.634851] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.093 [2024-04-19 03:33:58.635094] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.093 [2024-04-19 03:33:58.635131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.093 [2024-04-19 03:33:58.635148] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.093 [2024-04-19 03:33:58.638714] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.093 [2024-04-19 03:33:58.647943] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.093 [2024-04-19 03:33:58.648394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.354 [2024-04-19 03:33:58.649262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.354 [2024-04-19 03:33:58.649297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.354 [2024-04-19 03:33:58.649316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.354 [2024-04-19 03:33:58.649573] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.354 [2024-04-19 03:33:58.649817] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.354 [2024-04-19 03:33:58.649843] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.354 [2024-04-19 03:33:58.649859] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.354 [2024-04-19 03:33:58.653425] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.354 [2024-04-19 03:33:58.661819] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.354 [2024-04-19 03:33:58.662258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.662455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.662487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.354 [2024-04-19 03:33:58.662506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.354 [2024-04-19 03:33:58.662745] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.354 [2024-04-19 03:33:58.662988] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.354 [2024-04-19 03:33:58.663019] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.354 [2024-04-19 03:33:58.663037] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.354 [2024-04-19 03:33:58.666598] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.354 [2024-04-19 03:33:58.675820] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.354 [2024-04-19 03:33:58.676264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.676472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.676503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.354 [2024-04-19 03:33:58.676522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.354 [2024-04-19 03:33:58.676760] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.354 [2024-04-19 03:33:58.677003] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.354 [2024-04-19 03:33:58.677028] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.354 [2024-04-19 03:33:58.677045] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.354 [2024-04-19 03:33:58.680605] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.354 [2024-04-19 03:33:58.689837] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.354 [2024-04-19 03:33:58.690223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.690431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.690461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.354 [2024-04-19 03:33:58.690480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.354 [2024-04-19 03:33:58.690718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.354 [2024-04-19 03:33:58.690962] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.354 [2024-04-19 03:33:58.690988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.354 [2024-04-19 03:33:58.691004] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.354 [2024-04-19 03:33:58.694559] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.354 [2024-04-19 03:33:58.703777] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.354 [2024-04-19 03:33:58.704220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.704371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.704437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.354 [2024-04-19 03:33:58.704456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.354 [2024-04-19 03:33:58.704694] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.354 [2024-04-19 03:33:58.704936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.354 [2024-04-19 03:33:58.704962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.354 [2024-04-19 03:33:58.704985] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.354 [2024-04-19 03:33:58.708545] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.354 [2024-04-19 03:33:58.717771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.354 [2024-04-19 03:33:58.718183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.718360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.718400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.354 [2024-04-19 03:33:58.718429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.354 [2024-04-19 03:33:58.718667] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.354 [2024-04-19 03:33:58.718912] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.354 [2024-04-19 03:33:58.718937] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.354 [2024-04-19 03:33:58.718953] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.354 [2024-04-19 03:33:58.722516] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.354 [2024-04-19 03:33:58.731758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.354 [2024-04-19 03:33:58.732199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.732376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.732414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.354 [2024-04-19 03:33:58.732439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.354 [2024-04-19 03:33:58.732678] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.354 [2024-04-19 03:33:58.732921] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.354 [2024-04-19 03:33:58.732947] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.354 [2024-04-19 03:33:58.732963] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.354 [2024-04-19 03:33:58.736526] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.354 [2024-04-19 03:33:58.745766] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.354 [2024-04-19 03:33:58.746200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.746379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.354 [2024-04-19 03:33:58.746419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.354 [2024-04-19 03:33:58.746438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.354 [2024-04-19 03:33:58.746676] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.354 [2024-04-19 03:33:58.746919] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.355 [2024-04-19 03:33:58.746944] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.355 [2024-04-19 03:33:58.746961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.355 [2024-04-19 03:33:58.750527] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.355 [2024-04-19 03:33:58.759753] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.355 [2024-04-19 03:33:58.760254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.760463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.760494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.355 [2024-04-19 03:33:58.760513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.355 [2024-04-19 03:33:58.760751] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.355 [2024-04-19 03:33:58.760995] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.355 [2024-04-19 03:33:58.761020] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.355 [2024-04-19 03:33:58.761036] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.355 [2024-04-19 03:33:58.764596] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.355 [2024-04-19 03:33:58.773624] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.355 [2024-04-19 03:33:58.774185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.774370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.774410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.355 [2024-04-19 03:33:58.774438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.355 [2024-04-19 03:33:58.774676] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.355 [2024-04-19 03:33:58.774919] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.355 [2024-04-19 03:33:58.774944] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.355 [2024-04-19 03:33:58.774960] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.355 [2024-04-19 03:33:58.778522] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.355 [2024-04-19 03:33:58.787538] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.355 [2024-04-19 03:33:58.787983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.788191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.788218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.355 [2024-04-19 03:33:58.788236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.355 [2024-04-19 03:33:58.788486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.355 [2024-04-19 03:33:58.788729] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.355 [2024-04-19 03:33:58.788755] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.355 [2024-04-19 03:33:58.788772] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.355 [2024-04-19 03:33:58.792354] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.355 [2024-04-19 03:33:58.801429] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.355 [2024-04-19 03:33:58.801847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.802058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.802089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.355 [2024-04-19 03:33:58.802107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.355 [2024-04-19 03:33:58.802347] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.355 [2024-04-19 03:33:58.802602] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.355 [2024-04-19 03:33:58.802628] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.355 [2024-04-19 03:33:58.802644] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.355 [2024-04-19 03:33:58.806194] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.355 [2024-04-19 03:33:58.815425] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.355 [2024-04-19 03:33:58.815926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.816162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.816212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.355 [2024-04-19 03:33:58.816232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.355 [2024-04-19 03:33:58.816482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.355 [2024-04-19 03:33:58.816728] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.355 [2024-04-19 03:33:58.816753] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.355 [2024-04-19 03:33:58.816769] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.355 [2024-04-19 03:33:58.820323] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.355 [2024-04-19 03:33:58.829351] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.355 [2024-04-19 03:33:58.829797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.830000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.830030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.355 [2024-04-19 03:33:58.830048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.355 [2024-04-19 03:33:58.830286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.355 [2024-04-19 03:33:58.830543] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.355 [2024-04-19 03:33:58.830569] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.355 [2024-04-19 03:33:58.830585] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.355 [2024-04-19 03:33:58.834135] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.355 [2024-04-19 03:33:58.843361] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.355 [2024-04-19 03:33:58.843823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.844024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.844052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.355 [2024-04-19 03:33:58.844071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.355 [2024-04-19 03:33:58.844308] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.355 [2024-04-19 03:33:58.844563] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.355 [2024-04-19 03:33:58.844590] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.355 [2024-04-19 03:33:58.844606] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.355 [2024-04-19 03:33:58.848156] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.355 [2024-04-19 03:33:58.857389] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.355 [2024-04-19 03:33:58.857823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.858029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.858059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.355 [2024-04-19 03:33:58.858077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.355 [2024-04-19 03:33:58.858316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.355 [2024-04-19 03:33:58.858574] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.355 [2024-04-19 03:33:58.858599] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.355 [2024-04-19 03:33:58.858616] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.355 [2024-04-19 03:33:58.862165] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.355 [2024-04-19 03:33:58.871387] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.355 [2024-04-19 03:33:58.871823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.872030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.355 [2024-04-19 03:33:58.872060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.355 [2024-04-19 03:33:58.872078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.355 [2024-04-19 03:33:58.872317] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.355 [2024-04-19 03:33:58.872571] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.355 [2024-04-19 03:33:58.872598] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.355 [2024-04-19 03:33:58.872615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.355 [2024-04-19 03:33:58.876166] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.356 [2024-04-19 03:33:58.885376] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.356 [2024-04-19 03:33:58.885816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.356 [2024-04-19 03:33:58.885996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.356 [2024-04-19 03:33:58.886038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.356 [2024-04-19 03:33:58.886057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.356 [2024-04-19 03:33:58.886296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.356 [2024-04-19 03:33:58.886552] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.356 [2024-04-19 03:33:58.886579] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.356 [2024-04-19 03:33:58.886596] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.356 [2024-04-19 03:33:58.890142] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.356 [2024-04-19 03:33:58.899372] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.356 [2024-04-19 03:33:58.899810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.356 [2024-04-19 03:33:58.899989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.356 [2024-04-19 03:33:58.900017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.356 [2024-04-19 03:33:58.900034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.356 [2024-04-19 03:33:58.900273] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.356 [2024-04-19 03:33:58.900527] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.356 [2024-04-19 03:33:58.900554] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.356 [2024-04-19 03:33:58.900571] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.356 [2024-04-19 03:33:58.904122] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.615 [2024-04-19 03:33:58.913336] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.615 [2024-04-19 03:33:58.913779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.615 [2024-04-19 03:33:58.913931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.615 [2024-04-19 03:33:58.913959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.615 [2024-04-19 03:33:58.913977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.615 [2024-04-19 03:33:58.914215] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.615 [2024-04-19 03:33:58.914470] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.616 [2024-04-19 03:33:58.914497] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.616 [2024-04-19 03:33:58.914514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.616 [2024-04-19 03:33:58.918066] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.616 [2024-04-19 03:33:58.927281] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.616 [2024-04-19 03:33:58.927698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:58.927903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:58.927933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.616 [2024-04-19 03:33:58.927957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.616 [2024-04-19 03:33:58.928196] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.616 [2024-04-19 03:33:58.928451] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.616 [2024-04-19 03:33:58.928478] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.616 [2024-04-19 03:33:58.928495] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.616 [2024-04-19 03:33:58.932043] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.616 [2024-04-19 03:33:58.941257] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.616 [2024-04-19 03:33:58.941697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:58.941875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:58.941903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.616 [2024-04-19 03:33:58.941921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.616 [2024-04-19 03:33:58.942158] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.616 [2024-04-19 03:33:58.942413] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.616 [2024-04-19 03:33:58.942439] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.616 [2024-04-19 03:33:58.942456] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.616 [2024-04-19 03:33:58.946003] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.616 [2024-04-19 03:33:58.955217] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.616 [2024-04-19 03:33:58.955657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:58.955838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:58.955870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.616 [2024-04-19 03:33:58.955889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.616 [2024-04-19 03:33:58.956127] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.616 [2024-04-19 03:33:58.956372] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.616 [2024-04-19 03:33:58.956408] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.616 [2024-04-19 03:33:58.956426] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.616 [2024-04-19 03:33:58.959976] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.616 [2024-04-19 03:33:58.969194] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.616 [2024-04-19 03:33:58.969613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:58.969796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:58.969825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.616 [2024-04-19 03:33:58.969843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.616 [2024-04-19 03:33:58.970088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.616 [2024-04-19 03:33:58.970331] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.616 [2024-04-19 03:33:58.970358] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.616 [2024-04-19 03:33:58.970374] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.616 [2024-04-19 03:33:58.973942] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.616 [2024-04-19 03:33:58.983153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.616 [2024-04-19 03:33:58.983595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:58.983801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:58.983831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.616 [2024-04-19 03:33:58.983850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.616 [2024-04-19 03:33:58.984088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.616 [2024-04-19 03:33:58.984331] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.616 [2024-04-19 03:33:58.984358] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.616 [2024-04-19 03:33:58.984374] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.616 [2024-04-19 03:33:58.987936] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.616 [2024-04-19 03:33:58.997149] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.616 [2024-04-19 03:33:58.997576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:58.997755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:58.997783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.616 [2024-04-19 03:33:58.997801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.616 [2024-04-19 03:33:58.998039] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.616 [2024-04-19 03:33:58.998283] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.616 [2024-04-19 03:33:58.998309] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.616 [2024-04-19 03:33:58.998325] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.616 [2024-04-19 03:33:59.001948] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.616 [2024-04-19 03:33:59.011158] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.616 [2024-04-19 03:33:59.011599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:59.011778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:59.011808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.616 [2024-04-19 03:33:59.011827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.616 [2024-04-19 03:33:59.012066] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.616 [2024-04-19 03:33:59.012315] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.616 [2024-04-19 03:33:59.012342] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.616 [2024-04-19 03:33:59.012358] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.616 [2024-04-19 03:33:59.015918] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.616 [2024-04-19 03:33:59.025327] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.616 [2024-04-19 03:33:59.025750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:59.025929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:59.025957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.616 [2024-04-19 03:33:59.025975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.616 [2024-04-19 03:33:59.026213] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.616 [2024-04-19 03:33:59.026468] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.616 [2024-04-19 03:33:59.026494] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.616 [2024-04-19 03:33:59.026510] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.616 [2024-04-19 03:33:59.030059] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.616 [2024-04-19 03:33:59.039266] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.616 [2024-04-19 03:33:59.039709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:59.039889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.616 [2024-04-19 03:33:59.039919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.616 [2024-04-19 03:33:59.039938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.616 [2024-04-19 03:33:59.040177] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.616 [2024-04-19 03:33:59.040432] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.616 [2024-04-19 03:33:59.040458] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.616 [2024-04-19 03:33:59.040475] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.616 [2024-04-19 03:33:59.044023] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.616 [2024-04-19 03:33:59.053234] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.616 [2024-04-19 03:33:59.053671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.053855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.053887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.617 [2024-04-19 03:33:59.053906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.617 [2024-04-19 03:33:59.054144] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.617 [2024-04-19 03:33:59.054398] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.617 [2024-04-19 03:33:59.054437] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.617 [2024-04-19 03:33:59.054455] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.617 [2024-04-19 03:33:59.058005] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.617 [2024-04-19 03:33:59.067217] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.617 [2024-04-19 03:33:59.067635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.067782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.067813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.617 [2024-04-19 03:33:59.067832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.617 [2024-04-19 03:33:59.068070] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.617 [2024-04-19 03:33:59.068314] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.617 [2024-04-19 03:33:59.068340] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.617 [2024-04-19 03:33:59.068358] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.617 [2024-04-19 03:33:59.071923] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.617 [2024-04-19 03:33:59.081140] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.617 [2024-04-19 03:33:59.081558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.081734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.081762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.617 [2024-04-19 03:33:59.081780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.617 [2024-04-19 03:33:59.082019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.617 [2024-04-19 03:33:59.082263] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.617 [2024-04-19 03:33:59.082288] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.617 [2024-04-19 03:33:59.082304] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.617 [2024-04-19 03:33:59.085861] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.617 [2024-04-19 03:33:59.095068] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.617 [2024-04-19 03:33:59.095505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.095653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.095681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.617 [2024-04-19 03:33:59.095699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.617 [2024-04-19 03:33:59.095937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.617 [2024-04-19 03:33:59.096180] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.617 [2024-04-19 03:33:59.096205] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.617 [2024-04-19 03:33:59.096227] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.617 [2024-04-19 03:33:59.099787] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.617 [2024-04-19 03:33:59.108998] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.617 [2024-04-19 03:33:59.109430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.109619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.109647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.617 [2024-04-19 03:33:59.109665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.617 [2024-04-19 03:33:59.109904] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.617 [2024-04-19 03:33:59.110149] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.617 [2024-04-19 03:33:59.110175] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.617 [2024-04-19 03:33:59.110191] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.617 [2024-04-19 03:33:59.113750] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.617 [2024-04-19 03:33:59.122960] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.617 [2024-04-19 03:33:59.123371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.123581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.123610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.617 [2024-04-19 03:33:59.123629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.617 [2024-04-19 03:33:59.123867] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.617 [2024-04-19 03:33:59.124111] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.617 [2024-04-19 03:33:59.124137] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.617 [2024-04-19 03:33:59.124153] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.617 [2024-04-19 03:33:59.127709] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.617 [2024-04-19 03:33:59.136917] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.617 [2024-04-19 03:33:59.137361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.137541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.137570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.617 [2024-04-19 03:33:59.137588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.617 [2024-04-19 03:33:59.137827] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.617 [2024-04-19 03:33:59.138069] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.617 [2024-04-19 03:33:59.138096] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.617 [2024-04-19 03:33:59.138112] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.617 [2024-04-19 03:33:59.141678] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.617 [2024-04-19 03:33:59.150892] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.617 [2024-04-19 03:33:59.151311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.151510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.151539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.617 [2024-04-19 03:33:59.151557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.617 [2024-04-19 03:33:59.151795] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.617 [2024-04-19 03:33:59.152039] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.617 [2024-04-19 03:33:59.152065] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.617 [2024-04-19 03:33:59.152081] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.617 [2024-04-19 03:33:59.155659] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.617 [2024-04-19 03:33:59.164870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.617 [2024-04-19 03:33:59.165303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.165484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.617 [2024-04-19 03:33:59.165514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.617 [2024-04-19 03:33:59.165532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.617 [2024-04-19 03:33:59.165772] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.617 [2024-04-19 03:33:59.166017] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.617 [2024-04-19 03:33:59.166042] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.617 [2024-04-19 03:33:59.166059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.617 [2024-04-19 03:33:59.169618] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.877 [2024-04-19 03:33:59.178839] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.877 [2024-04-19 03:33:59.179278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.877 [2024-04-19 03:33:59.179486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.878 [2024-04-19 03:33:59.179516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.878 [2024-04-19 03:33:59.179534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.878 [2024-04-19 03:33:59.179773] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.878 [2024-04-19 03:33:59.180018] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.878 [2024-04-19 03:33:59.180043] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.878 [2024-04-19 03:33:59.180060] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.878 [2024-04-19 03:33:59.183618] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.878 [2024-04-19 03:33:59.192842] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.878 [2024-04-19 03:33:59.193248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.878 [2024-04-19 03:33:59.193456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.878 [2024-04-19 03:33:59.193487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.878 [2024-04-19 03:33:59.193506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.878 [2024-04-19 03:33:59.193745] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.878 [2024-04-19 03:33:59.193989] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.878 [2024-04-19 03:33:59.194015] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.878 [2024-04-19 03:33:59.194031] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.878 [2024-04-19 03:33:59.197587] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.878 [2024-04-19 03:33:59.206840] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.878 [2024-04-19 03:33:59.207291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.878 [2024-04-19 03:33:59.207473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.878 [2024-04-19 03:33:59.207504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:21.878 [2024-04-19 03:33:59.207522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:21.878 [2024-04-19 03:33:59.207762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:21.878 [2024-04-19 03:33:59.208006] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.878 [2024-04-19 03:33:59.208031] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.878 [2024-04-19 03:33:59.208048] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.878 [2024-04-19 03:33:59.211167] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.878 [2024-04-19 03:33:59.220001] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.878 [2024-04-19 03:33:59.220408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.220573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.220599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.878 [2024-04-19 03:33:59.220615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.878 [2024-04-19 03:33:59.220855] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.878 [2024-04-19 03:33:59.221065] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.878 [2024-04-19 03:33:59.221086] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.878 [2024-04-19 03:33:59.221099] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.878 [2024-04-19 03:33:59.224059] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.878 [2024-04-19 03:33:59.233253] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.878 [2024-04-19 03:33:59.233680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.233912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.233937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.878 [2024-04-19 03:33:59.233954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.878 [2024-04-19 03:33:59.234205] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.878 [2024-04-19 03:33:59.234458] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.878 [2024-04-19 03:33:59.234481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.878 [2024-04-19 03:33:59.234495] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.878 [2024-04-19 03:33:59.237454] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.878 [2024-04-19 03:33:59.246507] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.878 [2024-04-19 03:33:59.246920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.247119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.247145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.878 [2024-04-19 03:33:59.247162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.878 [2024-04-19 03:33:59.247427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.878 [2024-04-19 03:33:59.247655] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.878 [2024-04-19 03:33:59.247691] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.878 [2024-04-19 03:33:59.247706] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.878 [2024-04-19 03:33:59.250663] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.878 [2024-04-19 03:33:59.259749] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.878 [2024-04-19 03:33:59.260156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.260320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.260346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.878 [2024-04-19 03:33:59.260362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.878 [2024-04-19 03:33:59.260601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.878 [2024-04-19 03:33:59.260839] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.878 [2024-04-19 03:33:59.260861] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.878 [2024-04-19 03:33:59.260874] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.878 [2024-04-19 03:33:59.263814] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.878 [2024-04-19 03:33:59.272987] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.878 [2024-04-19 03:33:59.273347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.273559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.273585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.878 [2024-04-19 03:33:59.273602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.878 [2024-04-19 03:33:59.273868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.878 [2024-04-19 03:33:59.274064] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.878 [2024-04-19 03:33:59.274085] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.878 [2024-04-19 03:33:59.274098] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.878 [2024-04-19 03:33:59.277083] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.878 [2024-04-19 03:33:59.286279] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.878 [2024-04-19 03:33:59.286735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.286931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.286956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.878 [2024-04-19 03:33:59.286973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.878 [2024-04-19 03:33:59.287237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.878 [2024-04-19 03:33:59.287472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.878 [2024-04-19 03:33:59.287495] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.878 [2024-04-19 03:33:59.287510] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.878 [2024-04-19 03:33:59.290468] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.878 [2024-04-19 03:33:59.299497] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.878 [2024-04-19 03:33:59.299909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.300076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.878 [2024-04-19 03:33:59.300101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.878 [2024-04-19 03:33:59.300118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.878 [2024-04-19 03:33:59.300366] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.878 [2024-04-19 03:33:59.300594] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.879 [2024-04-19 03:33:59.300618] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.879 [2024-04-19 03:33:59.300632] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.879 [2024-04-19 03:33:59.303573] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.879 [2024-04-19 03:33:59.312731] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.879 [2024-04-19 03:33:59.313162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.313291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.313315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.879 [2024-04-19 03:33:59.313336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.879 [2024-04-19 03:33:59.313603] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.879 [2024-04-19 03:33:59.313835] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.879 [2024-04-19 03:33:59.313856] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.879 [2024-04-19 03:33:59.313869] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.879 [2024-04-19 03:33:59.316808] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.879 [2024-04-19 03:33:59.326009] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.879 [2024-04-19 03:33:59.326403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.326576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.326602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.879 [2024-04-19 03:33:59.326618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.879 [2024-04-19 03:33:59.326868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.879 [2024-04-19 03:33:59.327060] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.879 [2024-04-19 03:33:59.327081] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.879 [2024-04-19 03:33:59.327094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.879 [2024-04-19 03:33:59.330058] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.879 [2024-04-19 03:33:59.339250] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.879 [2024-04-19 03:33:59.339670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.339887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.339913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.879 [2024-04-19 03:33:59.339929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.879 [2024-04-19 03:33:59.340185] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.879 [2024-04-19 03:33:59.340436] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.879 [2024-04-19 03:33:59.340461] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.879 [2024-04-19 03:33:59.340477] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.879 [2024-04-19 03:33:59.343448] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.879 [2024-04-19 03:33:59.352484] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.879 [2024-04-19 03:33:59.352858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.353018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.353044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.879 [2024-04-19 03:33:59.353059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.879 [2024-04-19 03:33:59.353284] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.879 [2024-04-19 03:33:59.353541] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.879 [2024-04-19 03:33:59.353564] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.879 [2024-04-19 03:33:59.353578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.879 [2024-04-19 03:33:59.356569] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.879 [2024-04-19 03:33:59.365763] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.879 [2024-04-19 03:33:59.366144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.366313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.366339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.879 [2024-04-19 03:33:59.366356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.879 [2024-04-19 03:33:59.366594] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.879 [2024-04-19 03:33:59.366828] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.879 [2024-04-19 03:33:59.366850] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.879 [2024-04-19 03:33:59.366863] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.879 [2024-04-19 03:33:59.369798] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.879 [2024-04-19 03:33:59.378990] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.879 [2024-04-19 03:33:59.379335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.379549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.379577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.879 [2024-04-19 03:33:59.379594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.879 [2024-04-19 03:33:59.379846] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.879 [2024-04-19 03:33:59.380040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.879 [2024-04-19 03:33:59.380061] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.879 [2024-04-19 03:33:59.380074] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.879 [2024-04-19 03:33:59.383008] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.879 [2024-04-19 03:33:59.392228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.879 [2024-04-19 03:33:59.392694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.392831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.392856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.879 [2024-04-19 03:33:59.392873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.879 [2024-04-19 03:33:59.393115] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.879 [2024-04-19 03:33:59.393326] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.879 [2024-04-19 03:33:59.393363] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.879 [2024-04-19 03:33:59.393378] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.879 [2024-04-19 03:33:59.396817] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.879 [2024-04-19 03:33:59.405595] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.879 [2024-04-19 03:33:59.406024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.406238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.406266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.879 [2024-04-19 03:33:59.406283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.879 [2024-04-19 03:33:59.406563] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.879 [2024-04-19 03:33:59.406799] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.879 [2024-04-19 03:33:59.406821] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.879 [2024-04-19 03:33:59.406834] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.879 [2024-04-19 03:33:59.409868] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.879 [2024-04-19 03:33:59.418872] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.879 [2024-04-19 03:33:59.419232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.419405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.419433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.879 [2024-04-19 03:33:59.419450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.879 [2024-04-19 03:33:59.419694] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.879 [2024-04-19 03:33:59.419904] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.879 [2024-04-19 03:33:59.419926] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.879 [2024-04-19 03:33:59.419939] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.879 [2024-04-19 03:33:59.422877] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.879 [2024-04-19 03:33:59.432436] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.879 [2024-04-19 03:33:59.432799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.879 [2024-04-19 03:33:59.432949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.880 [2024-04-19 03:33:59.432974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:21.880 [2024-04-19 03:33:59.432990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:21.880 [2024-04-19 03:33:59.433234] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:21.880 [2024-04-19 03:33:59.433476] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.880 [2024-04-19 03:33:59.433504] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.880 [2024-04-19 03:33:59.433534] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.139 [2024-04-19 03:33:59.436775] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.139 [2024-04-19 03:33:59.445613] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.139 [2024-04-19 03:33:59.445991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.139 [2024-04-19 03:33:59.446128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.139 [2024-04-19 03:33:59.446155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.139 [2024-04-19 03:33:59.446172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.139 [2024-04-19 03:33:59.446441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.139 [2024-04-19 03:33:59.446662] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.139 [2024-04-19 03:33:59.446699] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.139 [2024-04-19 03:33:59.446714] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.139 [2024-04-19 03:33:59.449655] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.139 [2024-04-19 03:33:59.458961] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.139 [2024-04-19 03:33:59.459420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.139 [2024-04-19 03:33:59.459586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.139 [2024-04-19 03:33:59.459613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.139 [2024-04-19 03:33:59.459631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.139 [2024-04-19 03:33:59.459881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.140 [2024-04-19 03:33:59.460091] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.140 [2024-04-19 03:33:59.460111] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.140 [2024-04-19 03:33:59.460124] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.140 [2024-04-19 03:33:59.463098] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.140 [2024-04-19 03:33:59.472155] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.140 [2024-04-19 03:33:59.472589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.472759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.472786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.140 [2024-04-19 03:33:59.472803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.140 [2024-04-19 03:33:59.473067] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.140 [2024-04-19 03:33:59.473262] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.140 [2024-04-19 03:33:59.473283] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.140 [2024-04-19 03:33:59.473302] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.140 [2024-04-19 03:33:59.476274] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.140 [2024-04-19 03:33:59.485354] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.140 [2024-04-19 03:33:59.485788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.485979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.486004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.140 [2024-04-19 03:33:59.486021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.140 [2024-04-19 03:33:59.486273] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.140 [2024-04-19 03:33:59.486495] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.140 [2024-04-19 03:33:59.486517] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.140 [2024-04-19 03:33:59.486531] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.140 [2024-04-19 03:33:59.489523] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.140 [2024-04-19 03:33:59.498568] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.140 [2024-04-19 03:33:59.498931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.499112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.499152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.140 [2024-04-19 03:33:59.499169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.140 [2024-04-19 03:33:59.499413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.140 [2024-04-19 03:33:59.499634] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.140 [2024-04-19 03:33:59.499657] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.140 [2024-04-19 03:33:59.499671] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.140 [2024-04-19 03:33:59.502623] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.140 [2024-04-19 03:33:59.511863] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.140 [2024-04-19 03:33:59.512224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.512436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.512472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.140 [2024-04-19 03:33:59.512489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.140 [2024-04-19 03:33:59.512744] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.140 [2024-04-19 03:33:59.512939] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.140 [2024-04-19 03:33:59.512961] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.140 [2024-04-19 03:33:59.512974] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.140 [2024-04-19 03:33:59.515920] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.140 [2024-04-19 03:33:59.525150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.140 [2024-04-19 03:33:59.525588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.525776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.525801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.140 [2024-04-19 03:33:59.525818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.140 [2024-04-19 03:33:59.526071] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.140 [2024-04-19 03:33:59.526265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.140 [2024-04-19 03:33:59.526287] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.140 [2024-04-19 03:33:59.526300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.140 [2024-04-19 03:33:59.529280] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.140 [2024-04-19 03:33:59.538338] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.140 [2024-04-19 03:33:59.538738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.538930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.538957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.140 [2024-04-19 03:33:59.538974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.140 [2024-04-19 03:33:59.539227] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.140 [2024-04-19 03:33:59.539448] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.140 [2024-04-19 03:33:59.539486] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.140 [2024-04-19 03:33:59.539501] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.140 [2024-04-19 03:33:59.542463] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.140 [2024-04-19 03:33:59.551661] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.140 [2024-04-19 03:33:59.552135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.552302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.552327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.140 [2024-04-19 03:33:59.552344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.140 [2024-04-19 03:33:59.552583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.140 [2024-04-19 03:33:59.552818] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.140 [2024-04-19 03:33:59.552840] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.140 [2024-04-19 03:33:59.552854] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.140 [2024-04-19 03:33:59.555792] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.140 [2024-04-19 03:33:59.564879] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.140 [2024-04-19 03:33:59.565271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.565469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.565496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.140 [2024-04-19 03:33:59.565512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.140 [2024-04-19 03:33:59.565758] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.140 [2024-04-19 03:33:59.565968] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.140 [2024-04-19 03:33:59.565989] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.140 [2024-04-19 03:33:59.566003] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.140 [2024-04-19 03:33:59.568940] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.140 [2024-04-19 03:33:59.578162] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.140 [2024-04-19 03:33:59.578589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.578775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.140 [2024-04-19 03:33:59.578800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.140 [2024-04-19 03:33:59.578818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.140 [2024-04-19 03:33:59.579082] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.141 [2024-04-19 03:33:59.579277] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.141 [2024-04-19 03:33:59.579298] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.141 [2024-04-19 03:33:59.579312] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.141 [2024-04-19 03:33:59.582288] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.141 [2024-04-19 03:33:59.591305] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.141 [2024-04-19 03:33:59.591693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.591894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.591919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.141 [2024-04-19 03:33:59.591936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.141 [2024-04-19 03:33:59.592192] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.141 [2024-04-19 03:33:59.592425] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.141 [2024-04-19 03:33:59.592447] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.141 [2024-04-19 03:33:59.592476] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.141 [2024-04-19 03:33:59.595418] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.141 [2024-04-19 03:33:59.604617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.141 [2024-04-19 03:33:59.605034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.605199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.605226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.141 [2024-04-19 03:33:59.605258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.141 [2024-04-19 03:33:59.605532] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.141 [2024-04-19 03:33:59.605754] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.141 [2024-04-19 03:33:59.605776] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.141 [2024-04-19 03:33:59.605790] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.141 [2024-04-19 03:33:59.608778] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.141 [2024-04-19 03:33:59.617853] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.141 [2024-04-19 03:33:59.618255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.618426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.618454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.141 [2024-04-19 03:33:59.618471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.141 [2024-04-19 03:33:59.618699] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.141 [2024-04-19 03:33:59.618908] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.141 [2024-04-19 03:33:59.618929] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.141 [2024-04-19 03:33:59.618943] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.141 [2024-04-19 03:33:59.622101] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.141 [2024-04-19 03:33:59.631809] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.141 [2024-04-19 03:33:59.632254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.632431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.632462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.141 [2024-04-19 03:33:59.632481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.141 [2024-04-19 03:33:59.632720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.141 [2024-04-19 03:33:59.632964] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.141 [2024-04-19 03:33:59.632990] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.141 [2024-04-19 03:33:59.633007] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.141 [2024-04-19 03:33:59.636571] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.141 [2024-04-19 03:33:59.645793] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.141 [2024-04-19 03:33:59.646329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.646541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.646570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.141 [2024-04-19 03:33:59.646588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.141 [2024-04-19 03:33:59.646827] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.141 [2024-04-19 03:33:59.647071] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.141 [2024-04-19 03:33:59.647097] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.141 [2024-04-19 03:33:59.647114] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.141 [2024-04-19 03:33:59.650681] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.141 [2024-04-19 03:33:59.659696] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.141 [2024-04-19 03:33:59.660129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.660284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.660314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.141 [2024-04-19 03:33:59.660332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.141 [2024-04-19 03:33:59.660582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.141 [2024-04-19 03:33:59.660827] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.141 [2024-04-19 03:33:59.660852] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.141 [2024-04-19 03:33:59.660869] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.141 [2024-04-19 03:33:59.664441] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.141 [2024-04-19 03:33:59.673669] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.141 [2024-04-19 03:33:59.674153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.674296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.674324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.141 [2024-04-19 03:33:59.674342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.141 [2024-04-19 03:33:59.674593] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.141 [2024-04-19 03:33:59.674836] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.141 [2024-04-19 03:33:59.674862] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.141 [2024-04-19 03:33:59.674878] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.141 [2024-04-19 03:33:59.678441] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.141 [2024-04-19 03:33:59.687690] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.141 [2024-04-19 03:33:59.688126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.688280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.141 [2024-04-19 03:33:59.688309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.141 [2024-04-19 03:33:59.688337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.141 [2024-04-19 03:33:59.688585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.141 [2024-04-19 03:33:59.688830] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.141 [2024-04-19 03:33:59.688855] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.141 [2024-04-19 03:33:59.688872] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.141 [2024-04-19 03:33:59.692425] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.402 [2024-04-19 03:33:59.701650] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.402 [2024-04-19 03:33:59.702133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.402 [2024-04-19 03:33:59.702333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.402 [2024-04-19 03:33:59.702361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.402 [2024-04-19 03:33:59.702388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.402 [2024-04-19 03:33:59.702640] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.402 [2024-04-19 03:33:59.702895] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.402 [2024-04-19 03:33:59.702920] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.402 [2024-04-19 03:33:59.702936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.402 [2024-04-19 03:33:59.706495] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.402 [2024-04-19 03:33:59.715521] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.402 [2024-04-19 03:33:59.716023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.402 [2024-04-19 03:33:59.716268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.402 [2024-04-19 03:33:59.716297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.402 [2024-04-19 03:33:59.716316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.402 [2024-04-19 03:33:59.716566] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.402 [2024-04-19 03:33:59.716810] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.402 [2024-04-19 03:33:59.716836] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.402 [2024-04-19 03:33:59.716852] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.402 [2024-04-19 03:33:59.720415] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.402 [2024-04-19 03:33:59.729454] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.402 [2024-04-19 03:33:59.729892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.402 [2024-04-19 03:33:59.730096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.402 [2024-04-19 03:33:59.730124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.402 [2024-04-19 03:33:59.730143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.402 [2024-04-19 03:33:59.730396] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.402 [2024-04-19 03:33:59.730640] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.402 [2024-04-19 03:33:59.730665] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.402 [2024-04-19 03:33:59.730682] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.402 [2024-04-19 03:33:59.734232] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.402 [2024-04-19 03:33:59.743458] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.402 [2024-04-19 03:33:59.743893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.402 [2024-04-19 03:33:59.744128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.402 [2024-04-19 03:33:59.744174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.402 [2024-04-19 03:33:59.744193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.402 [2024-04-19 03:33:59.744441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.402 [2024-04-19 03:33:59.744684] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.403 [2024-04-19 03:33:59.744709] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.403 [2024-04-19 03:33:59.744726] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.403 [2024-04-19 03:33:59.748273] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.403 [2024-04-19 03:33:59.757292] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.403 [2024-04-19 03:33:59.757726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.757938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.757967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.403 [2024-04-19 03:33:59.757994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.403 [2024-04-19 03:33:59.758233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.403 [2024-04-19 03:33:59.758487] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.403 [2024-04-19 03:33:59.758513] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.403 [2024-04-19 03:33:59.758529] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.403 [2024-04-19 03:33:59.762083] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.403 [2024-04-19 03:33:59.771335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.403 [2024-04-19 03:33:59.771755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.771956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.772004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.403 [2024-04-19 03:33:59.772023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.403 [2024-04-19 03:33:59.772262] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.403 [2024-04-19 03:33:59.772522] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.403 [2024-04-19 03:33:59.772549] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.403 [2024-04-19 03:33:59.772565] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.403 [2024-04-19 03:33:59.776124] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.403 [2024-04-19 03:33:59.785160] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.403 [2024-04-19 03:33:59.785612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.785806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.785835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.403 [2024-04-19 03:33:59.785853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.403 [2024-04-19 03:33:59.786093] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.403 [2024-04-19 03:33:59.786336] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.403 [2024-04-19 03:33:59.786361] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.403 [2024-04-19 03:33:59.786377] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.403 [2024-04-19 03:33:59.789943] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.403 [2024-04-19 03:33:59.799014] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.403 [2024-04-19 03:33:59.799451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.799609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.799638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.403 [2024-04-19 03:33:59.799657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.403 [2024-04-19 03:33:59.799894] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.403 [2024-04-19 03:33:59.800138] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.403 [2024-04-19 03:33:59.800164] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.403 [2024-04-19 03:33:59.800180] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.403 [2024-04-19 03:33:59.803740] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.403 [2024-04-19 03:33:59.812970] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.403 [2024-04-19 03:33:59.813404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.813610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.813639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.403 [2024-04-19 03:33:59.813658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.403 [2024-04-19 03:33:59.813896] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.403 [2024-04-19 03:33:59.814139] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.403 [2024-04-19 03:33:59.814169] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.403 [2024-04-19 03:33:59.814186] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.403 [2024-04-19 03:33:59.817751] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.403 [2024-04-19 03:33:59.826977] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.403 [2024-04-19 03:33:59.827410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.827609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.827639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.403 [2024-04-19 03:33:59.827658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.403 [2024-04-19 03:33:59.827897] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.403 [2024-04-19 03:33:59.828140] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.403 [2024-04-19 03:33:59.828165] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.403 [2024-04-19 03:33:59.828181] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.403 [2024-04-19 03:33:59.831742] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.403 [2024-04-19 03:33:59.840800] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.403 [2024-04-19 03:33:59.841275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.841474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.403 [2024-04-19 03:33:59.841504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.403 [2024-04-19 03:33:59.841523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.403 [2024-04-19 03:33:59.841762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.404 [2024-04-19 03:33:59.842005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.404 [2024-04-19 03:33:59.842030] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.404 [2024-04-19 03:33:59.842047] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.404 [2024-04-19 03:33:59.845613] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.404 [2024-04-19 03:33:59.854629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.404 [2024-04-19 03:33:59.855127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.855358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.855396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.404 [2024-04-19 03:33:59.855416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.404 [2024-04-19 03:33:59.855661] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.404 [2024-04-19 03:33:59.855905] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.404 [2024-04-19 03:33:59.855931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.404 [2024-04-19 03:33:59.855953] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.404 [2024-04-19 03:33:59.859514] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.404 [2024-04-19 03:33:59.868542] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.404 [2024-04-19 03:33:59.869011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.869288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.869342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.404 [2024-04-19 03:33:59.869360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.404 [2024-04-19 03:33:59.869608] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.404 [2024-04-19 03:33:59.869852] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.404 [2024-04-19 03:33:59.869878] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.404 [2024-04-19 03:33:59.869894] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.404 [2024-04-19 03:33:59.873463] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.404 [2024-04-19 03:33:59.882505] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.404 [2024-04-19 03:33:59.882939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.883161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.883191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.404 [2024-04-19 03:33:59.883210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.404 [2024-04-19 03:33:59.883461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.404 [2024-04-19 03:33:59.883705] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.404 [2024-04-19 03:33:59.883731] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.404 [2024-04-19 03:33:59.883748] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.404 [2024-04-19 03:33:59.887297] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.404 [2024-04-19 03:33:59.896320] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.404 [2024-04-19 03:33:59.896766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.897000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.897048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.404 [2024-04-19 03:33:59.897067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.404 [2024-04-19 03:33:59.897305] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.404 [2024-04-19 03:33:59.897560] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.404 [2024-04-19 03:33:59.897586] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.404 [2024-04-19 03:33:59.897603] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.404 [2024-04-19 03:33:59.901158] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.404 [2024-04-19 03:33:59.910176] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.404 [2024-04-19 03:33:59.910592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.910766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.910795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.404 [2024-04-19 03:33:59.910814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.404 [2024-04-19 03:33:59.911053] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.404 [2024-04-19 03:33:59.911295] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.404 [2024-04-19 03:33:59.911321] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.404 [2024-04-19 03:33:59.911337] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.404 [2024-04-19 03:33:59.914902] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.404 [2024-04-19 03:33:59.924124] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.404 [2024-04-19 03:33:59.924569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.924814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.924843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.404 [2024-04-19 03:33:59.924862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.404 [2024-04-19 03:33:59.925100] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.404 [2024-04-19 03:33:59.925343] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.404 [2024-04-19 03:33:59.925368] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.404 [2024-04-19 03:33:59.925394] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.404 [2024-04-19 03:33:59.928944] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.404 [2024-04-19 03:33:59.937963] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.404 [2024-04-19 03:33:59.938478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.938660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.404 [2024-04-19 03:33:59.938697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.404 [2024-04-19 03:33:59.938716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.405 [2024-04-19 03:33:59.938954] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.405 [2024-04-19 03:33:59.939197] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.405 [2024-04-19 03:33:59.939223] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.405 [2024-04-19 03:33:59.939239] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.405 [2024-04-19 03:33:59.942795] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.405 [2024-04-19 03:33:59.951830] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.405 [2024-04-19 03:33:59.952239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.405 [2024-04-19 03:33:59.952441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.405 [2024-04-19 03:33:59.952471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.405 [2024-04-19 03:33:59.952489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.405 [2024-04-19 03:33:59.952740] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.405 [2024-04-19 03:33:59.952983] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.405 [2024-04-19 03:33:59.953009] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.405 [2024-04-19 03:33:59.953025] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.405 [2024-04-19 03:33:59.956581] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.666 [2024-04-19 03:33:59.965803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.666 [2024-04-19 03:33:59.966241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.666 [2024-04-19 03:33:59.966400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.666 [2024-04-19 03:33:59.966439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.666 [2024-04-19 03:33:59.966457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.666 [2024-04-19 03:33:59.966695] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.666 [2024-04-19 03:33:59.966940] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.666 [2024-04-19 03:33:59.966966] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.666 [2024-04-19 03:33:59.966982] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.666 [2024-04-19 03:33:59.970538] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.666 [2024-04-19 03:33:59.979766] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.666 [2024-04-19 03:33:59.980212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.666 [2024-04-19 03:33:59.980391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.666 [2024-04-19 03:33:59.980420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.666 [2024-04-19 03:33:59.980438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.666 [2024-04-19 03:33:59.980675] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.666 [2024-04-19 03:33:59.980920] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.666 [2024-04-19 03:33:59.980945] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.666 [2024-04-19 03:33:59.980962] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.666 [2024-04-19 03:33:59.984517] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.666 [2024-04-19 03:33:59.993774] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.666 [2024-04-19 03:33:59.994353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.666 [2024-04-19 03:33:59.994576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.666 [2024-04-19 03:33:59.994604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.666 [2024-04-19 03:33:59.994623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.666 [2024-04-19 03:33:59.994861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.666 [2024-04-19 03:33:59.995105] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.666 [2024-04-19 03:33:59.995131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.666 [2024-04-19 03:33:59.995147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.666 [2024-04-19 03:33:59.998709] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.666 [2024-04-19 03:34:00.007735] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.666 [2024-04-19 03:34:00.008149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.666 [2024-04-19 03:34:00.008321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.666 [2024-04-19 03:34:00.008351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.666 [2024-04-19 03:34:00.008370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.666 [2024-04-19 03:34:00.008620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.666 [2024-04-19 03:34:00.008864] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.667 [2024-04-19 03:34:00.008890] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.667 [2024-04-19 03:34:00.008907] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.667 [2024-04-19 03:34:00.012467] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.667 [2024-04-19 03:34:00.021563] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.667 [2024-04-19 03:34:00.022075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.022256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.022285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.667 [2024-04-19 03:34:00.022303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.667 [2024-04-19 03:34:00.022556] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.667 [2024-04-19 03:34:00.022802] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.667 [2024-04-19 03:34:00.022828] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.667 [2024-04-19 03:34:00.022846] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.667 [2024-04-19 03:34:00.026410] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.667 [2024-04-19 03:34:00.035431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.667 [2024-04-19 03:34:00.036013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.036302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.036353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.667 [2024-04-19 03:34:00.036374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.667 [2024-04-19 03:34:00.036628] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.667 [2024-04-19 03:34:00.036872] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.667 [2024-04-19 03:34:00.036900] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.667 [2024-04-19 03:34:00.036918] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.667 [2024-04-19 03:34:00.040531] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.667 [2024-04-19 03:34:00.049341] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.667 [2024-04-19 03:34:00.049789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.049973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.050003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.667 [2024-04-19 03:34:00.050022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.667 [2024-04-19 03:34:00.050260] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.667 [2024-04-19 03:34:00.050519] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.667 [2024-04-19 03:34:00.050545] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.667 [2024-04-19 03:34:00.050562] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.667 [2024-04-19 03:34:00.054114] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.667 [2024-04-19 03:34:00.063340] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.667 [2024-04-19 03:34:00.063784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.063997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.064026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.667 [2024-04-19 03:34:00.064045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.667 [2024-04-19 03:34:00.064284] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.667 [2024-04-19 03:34:00.064539] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.667 [2024-04-19 03:34:00.064565] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.667 [2024-04-19 03:34:00.064582] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.667 [2024-04-19 03:34:00.068132] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.667 [2024-04-19 03:34:00.077355] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.667 [2024-04-19 03:34:00.077943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.078130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.078159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.667 [2024-04-19 03:34:00.078188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.667 [2024-04-19 03:34:00.078441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.667 [2024-04-19 03:34:00.078686] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.667 [2024-04-19 03:34:00.078711] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.667 [2024-04-19 03:34:00.078727] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.667 [2024-04-19 03:34:00.082275] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.667 [2024-04-19 03:34:00.091298] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.667 [2024-04-19 03:34:00.091749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.091962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.091992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.667 [2024-04-19 03:34:00.092011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.667 [2024-04-19 03:34:00.092249] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.667 [2024-04-19 03:34:00.092510] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.667 [2024-04-19 03:34:00.092536] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.667 [2024-04-19 03:34:00.092553] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.667 [2024-04-19 03:34:00.096105] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.667 [2024-04-19 03:34:00.105122] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.667 [2024-04-19 03:34:00.105567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.105752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.105782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.667 [2024-04-19 03:34:00.105801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.667 [2024-04-19 03:34:00.106040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.667 [2024-04-19 03:34:00.106284] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.667 [2024-04-19 03:34:00.106310] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.667 [2024-04-19 03:34:00.106326] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.667 [2024-04-19 03:34:00.109891] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.667 [2024-04-19 03:34:00.119105] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.667 [2024-04-19 03:34:00.119524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.119742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.667 [2024-04-19 03:34:00.119771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.667 [2024-04-19 03:34:00.119789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.667 [2024-04-19 03:34:00.120034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.667 [2024-04-19 03:34:00.120279] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.667 [2024-04-19 03:34:00.120305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.667 [2024-04-19 03:34:00.120322] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.667 [2024-04-19 03:34:00.123883] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 324770 Killed "${NVMF_APP[@]}" "$@"
00:20:22.667 03:34:00 -- host/bdevperf.sh@36 -- # tgt_init
00:20:22.667 03:34:00 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:20:22.667 03:34:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:20:22.667 03:34:00 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:22.667 03:34:00 -- common/autotest_common.sh@10 -- # set +x
00:20:22.667 03:34:00 -- nvmf/common.sh@470 -- # nvmfpid=325853
00:20:22.667 03:34:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:20:22.667 03:34:00 -- nvmf/common.sh@471 -- # waitforlisten 325853
00:20:22.667 03:34:00 -- common/autotest_common.sh@817 -- # '[' -z 325853 ']'
00:20:22.667 03:34:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:22.667 03:34:00 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:22.667 03:34:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
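The trace above shows the previous nvmf_tgt process (PID 324770) being killed and tgt_init relaunching it, with waitforlisten polling (max_retries=100) until the new process accepts connections on /var/tmp/spdk.sock. The real helper is a shell function in autotest_common.sh; as a sketch of that kind of readiness probe, here is an assumed, standalone UNIX-domain connect loop in C (helper name wait_for_listen and the 100 ms probe interval are illustrative):

/* Sketch of a "waitforlisten"-style readiness probe.
 * ASSUMPTIONS: standalone illustration; the test harness actually
 * implements this as a shell function, not C code. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            close(fd);
            return 0;        /* daemon is up and listening */
        }
        close(fd);
        usleep(100000);      /* 100 ms between probes */
    }
    errno = ETIMEDOUT;
    return -1;
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        perror("process never started listening");
        return 1;
    }
    puts("UNIX domain socket /var/tmp/spdk.sock is accepting connections");
    return 0;
}

Probing the socket rather than sleeping a fixed time is what lets the harness proceed as soon as the relaunched target is ready, which matters here because the bdevperf reconnect loop keeps failing until that moment.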
00:20:22.667 03:34:00 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:22.667 03:34:00 -- common/autotest_common.sh@10 -- # set +x
[2024-04-19 03:34:00.133106] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.668 [2024-04-19 03:34:00.133525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.133695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.133725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.668 [2024-04-19 03:34:00.133745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.668 [2024-04-19 03:34:00.133984] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.668 [2024-04-19 03:34:00.134227] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.668 [2024-04-19 03:34:00.134252] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.668 [2024-04-19 03:34:00.134268] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.668 [2024-04-19 03:34:00.137822] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.668 [2024-04-19 03:34:00.147030] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.668 [2024-04-19 03:34:00.147439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.147618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.147647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.668 [2024-04-19 03:34:00.147666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.668 [2024-04-19 03:34:00.147904] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.668 [2024-04-19 03:34:00.148152] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.668 [2024-04-19 03:34:00.148177] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.668 [2024-04-19 03:34:00.148193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.668 [2024-04-19 03:34:00.151750] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.668 [2024-04-19 03:34:00.160967] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.668 [2024-04-19 03:34:00.161377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.161543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.161572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.668 [2024-04-19 03:34:00.161591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.668 [2024-04-19 03:34:00.161829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.668 [2024-04-19 03:34:00.162072] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.668 [2024-04-19 03:34:00.162097] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.668 [2024-04-19 03:34:00.162115] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.668 [2024-04-19 03:34:00.165677] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.668 [2024-04-19 03:34:00.174899] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.668 [2024-04-19 03:34:00.175318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.175498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.175528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.668 [2024-04-19 03:34:00.175547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.668 [2024-04-19 03:34:00.175786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.668 [2024-04-19 03:34:00.176029] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.668 [2024-04-19 03:34:00.176054] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.668 [2024-04-19 03:34:00.176071] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.668 [2024-04-19 03:34:00.179229] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization...
00:20:22.668 [2024-04-19 03:34:00.179301] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:22.668 [2024-04-19 03:34:00.179661] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.668 [2024-04-19 03:34:00.188868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.668 [2024-04-19 03:34:00.189285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.189459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.189489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.668 [2024-04-19 03:34:00.189508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.668 [2024-04-19 03:34:00.189754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.668 [2024-04-19 03:34:00.189998] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.668 [2024-04-19 03:34:00.190023] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.668 [2024-04-19 03:34:00.190039] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.668 [2024-04-19 03:34:00.193595] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.668 [2024-04-19 03:34:00.202813] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.668 [2024-04-19 03:34:00.203291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.203497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.203537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.668 [2024-04-19 03:34:00.203556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.668 [2024-04-19 03:34:00.203794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.668 [2024-04-19 03:34:00.204036] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.668 [2024-04-19 03:34:00.204060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.668 [2024-04-19 03:34:00.204077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.668 [2024-04-19 03:34:00.207640] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.668 [2024-04-19 03:34:00.216655] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.668 [2024-04-19 03:34:00.217101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.217324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.668 [2024-04-19 03:34:00.217354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.668 [2024-04-19 03:34:00.217372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.668 [2024-04-19 03:34:00.217620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.668 [2024-04-19 03:34:00.217862] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.668 [2024-04-19 03:34:00.217887] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.668 [2024-04-19 03:34:00.217903] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.668 EAL: No free 2048 kB hugepages reported on node 1
00:20:22.668 [2024-04-19 03:34:00.221467] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.930 [2024-04-19 03:34:00.230504] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.930 [2024-04-19 03:34:00.230946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.231136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.231165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.930 [2024-04-19 03:34:00.231184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.930 [2024-04-19 03:34:00.231440] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.930 [2024-04-19 03:34:00.231684] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.930 [2024-04-19 03:34:00.231709] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.930 [2024-04-19 03:34:00.231725] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.930 [2024-04-19 03:34:00.235466] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.930 [2024-04-19 03:34:00.244472] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.930 [2024-04-19 03:34:00.244908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.245056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.245088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.930 [2024-04-19 03:34:00.245108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.930 [2024-04-19 03:34:00.245346] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.930 [2024-04-19 03:34:00.245606] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.930 [2024-04-19 03:34:00.245631] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.930 [2024-04-19 03:34:00.245648] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.930 [2024-04-19 03:34:00.249275] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.930 [2024-04-19 03:34:00.256570] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:22.930 [2024-04-19 03:34:00.258296] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.930 [2024-04-19 03:34:00.258755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.258971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.259001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.930 [2024-04-19 03:34:00.259020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.930 [2024-04-19 03:34:00.259259] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.930 [2024-04-19 03:34:00.259512] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.930 [2024-04-19 03:34:00.259539] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.930 [2024-04-19 03:34:00.259567] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.930 [2024-04-19 03:34:00.263187] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
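spdk_app_start reports "Total cores available: 3" because the relaunched target was given coremask 0xE (nvmf_tgt -m 0xE, passed through to EAL as -c 0xE): binary 1110, i.e. cores 1, 2 and 3. A tiny sketch of that decoding (standalone illustration, not DPDK's actual EAL argument parser):

/* Sketch: decode a DPDK-style hex coremask. 0xE = binary 1110,
 * i.e. cores 1, 2 and 3 enabled — hence "Total cores available: 3".
 * ASSUMPTION: illustrative only, not EAL's parsing code. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xE;   /* from -m 0xE / -c 0xE */
    int total = 0;
    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core)) {
            printf("core %d enabled\n", core);
            total++;
        }
    }
    printf("Total cores available: %d\n", total);
    return 0;
}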
00:20:22.930 [2024-04-19 03:34:00.272258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.930 [2024-04-19 03:34:00.272887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.273125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.273156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.930 [2024-04-19 03:34:00.273177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.930 [2024-04-19 03:34:00.273436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.930 [2024-04-19 03:34:00.273695] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.930 [2024-04-19 03:34:00.273721] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.930 [2024-04-19 03:34:00.273739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.930 [2024-04-19 03:34:00.277283] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.930 [2024-04-19 03:34:00.286312] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.930 [2024-04-19 03:34:00.286774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.286991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.287020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.930 [2024-04-19 03:34:00.287044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.930 [2024-04-19 03:34:00.287282] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.930 [2024-04-19 03:34:00.287538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.930 [2024-04-19 03:34:00.287564] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.930 [2024-04-19 03:34:00.287581] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.930 [2024-04-19 03:34:00.291140] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.930 [2024-04-19 03:34:00.300154] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.930 [2024-04-19 03:34:00.300607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.300790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.300821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.930 [2024-04-19 03:34:00.300840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.930 [2024-04-19 03:34:00.301078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.930 [2024-04-19 03:34:00.301322] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.930 [2024-04-19 03:34:00.301347] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.930 [2024-04-19 03:34:00.301363] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.930 [2024-04-19 03:34:00.304933] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.930 [2024-04-19 03:34:00.314142] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.930 [2024-04-19 03:34:00.314605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.314822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.314851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.930 [2024-04-19 03:34:00.314870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.930 [2024-04-19 03:34:00.315112] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.930 [2024-04-19 03:34:00.315356] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.930 [2024-04-19 03:34:00.315418] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.930 [2024-04-19 03:34:00.315438] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.930 [2024-04-19 03:34:00.318990] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.930 [2024-04-19 03:34:00.328058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.930 [2024-04-19 03:34:00.328679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.328930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.930 [2024-04-19 03:34:00.328972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.930 [2024-04-19 03:34:00.328993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.930 [2024-04-19 03:34:00.329253] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.930 [2024-04-19 03:34:00.329528] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.930 [2024-04-19 03:34:00.329554] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.931 [2024-04-19 03:34:00.329574] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.931 [2024-04-19 03:34:00.333149] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.931 [2024-04-19 03:34:00.341956] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.931 [2024-04-19 03:34:00.342417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.931 [2024-04-19 03:34:00.342613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.931 [2024-04-19 03:34:00.342642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420
00:20:22.931 [2024-04-19 03:34:00.342661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set
00:20:22.931 [2024-04-19 03:34:00.342906] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor
00:20:22.931 [2024-04-19 03:34:00.343149] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.931 [2024-04-19 03:34:00.343175] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.931 [2024-04-19 03:34:00.343193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.931 [2024-04-19 03:34:00.346763] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.931 [2024-04-19 03:34:00.355974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.931 [2024-04-19 03:34:00.356436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.356598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.356627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.931 [2024-04-19 03:34:00.356646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.931 [2024-04-19 03:34:00.356891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.931 [2024-04-19 03:34:00.357135] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.931 [2024-04-19 03:34:00.357161] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.931 [2024-04-19 03:34:00.357201] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.931 [2024-04-19 03:34:00.360762] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.931 [2024-04-19 03:34:00.369986] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.931 [2024-04-19 03:34:00.370450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.370606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.370634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.931 [2024-04-19 03:34:00.370653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.931 [2024-04-19 03:34:00.370891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.931 [2024-04-19 03:34:00.371135] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.931 [2024-04-19 03:34:00.371161] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.931 [2024-04-19 03:34:00.371178] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.931 [2024-04-19 03:34:00.374748] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.931 [2024-04-19 03:34:00.378287] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.931 [2024-04-19 03:34:00.378334] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.931 [2024-04-19 03:34:00.378350] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.931 [2024-04-19 03:34:00.378373] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:22.931 [2024-04-19 03:34:00.378395] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.931 [2024-04-19 03:34:00.378461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.931 [2024-04-19 03:34:00.378514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.931 [2024-04-19 03:34:00.378517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.931 [2024-04-19 03:34:00.383971] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.931 [2024-04-19 03:34:00.384493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.384688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.384729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.931 [2024-04-19 03:34:00.384749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.931 [2024-04-19 03:34:00.384993] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.931 [2024-04-19 03:34:00.385239] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.931 [2024-04-19 03:34:00.385265] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.931 [2024-04-19 03:34:00.385284] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.931 [2024-04-19 03:34:00.388850] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.931 [2024-04-19 03:34:00.397887] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.931 [2024-04-19 03:34:00.398552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.398733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.398765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.931 [2024-04-19 03:34:00.398787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.931 [2024-04-19 03:34:00.399036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.931 [2024-04-19 03:34:00.399285] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.931 [2024-04-19 03:34:00.399312] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.931 [2024-04-19 03:34:00.399332] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.931 [2024-04-19 03:34:00.402910] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
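The app_setup_trace notice above documents how to inspect the tracepoints enabled by the 0xFFFF group mask: either attach spdk_trace to the live instance, or copy the shared-memory trace file for offline decoding after the run. A minimal sketch using exactly the invocation and path printed in the notice (the -i 0 instance ID and /dev/shm/nvmf_trace.0 come from the log; where and with what privileges this is run are assumptions):

  # Capture a live snapshot of the running nvmf app's trace events (instance id 0).
  spdk_trace -s nvmf -i 0
  # Or preserve the shared-memory trace file for offline analysis/debug.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0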
00:20:22.931 [2024-04-19 03:34:00.411939] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.931 [2024-04-19 03:34:00.412524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.412730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.412759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.931 [2024-04-19 03:34:00.412782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.931 [2024-04-19 03:34:00.413032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.931 [2024-04-19 03:34:00.413282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.931 [2024-04-19 03:34:00.413308] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.931 [2024-04-19 03:34:00.413328] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.931 [2024-04-19 03:34:00.416893] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.931 [2024-04-19 03:34:00.425917] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.931 [2024-04-19 03:34:00.426524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.426834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.426866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.931 [2024-04-19 03:34:00.426888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.931 [2024-04-19 03:34:00.427141] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.931 [2024-04-19 03:34:00.427400] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.931 [2024-04-19 03:34:00.427428] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.931 [2024-04-19 03:34:00.427448] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.931 [2024-04-19 03:34:00.431004] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.931 [2024-04-19 03:34:00.439819] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.931 [2024-04-19 03:34:00.440359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.440561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.440590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.931 [2024-04-19 03:34:00.440622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.931 [2024-04-19 03:34:00.440869] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.931 [2024-04-19 03:34:00.441115] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.931 [2024-04-19 03:34:00.441141] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.931 [2024-04-19 03:34:00.441160] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.931 [2024-04-19 03:34:00.444717] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.931 [2024-04-19 03:34:00.453735] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.931 [2024-04-19 03:34:00.454403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.454642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.931 [2024-04-19 03:34:00.454673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.931 [2024-04-19 03:34:00.454694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.931 [2024-04-19 03:34:00.454944] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.932 [2024-04-19 03:34:00.455193] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.932 [2024-04-19 03:34:00.455219] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.932 [2024-04-19 03:34:00.455239] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.932 [2024-04-19 03:34:00.458898] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.932 [2024-04-19 03:34:00.467713] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.932 [2024-04-19 03:34:00.468295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.932 [2024-04-19 03:34:00.468501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.932 [2024-04-19 03:34:00.468532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.932 [2024-04-19 03:34:00.468553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.932 [2024-04-19 03:34:00.468798] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.932 [2024-04-19 03:34:00.469044] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.932 [2024-04-19 03:34:00.469071] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.932 [2024-04-19 03:34:00.469089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.932 [2024-04-19 03:34:00.472660] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.932 [2024-04-19 03:34:00.481680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.932 [2024-04-19 03:34:00.482132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.932 [2024-04-19 03:34:00.482328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.932 [2024-04-19 03:34:00.482359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:22.932 [2024-04-19 03:34:00.482398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:22.932 [2024-04-19 03:34:00.482639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:22.932 [2024-04-19 03:34:00.482893] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.932 [2024-04-19 03:34:00.482919] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.932 [2024-04-19 03:34:00.482935] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.191 [2024-04-19 03:34:00.486494] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.191 [2024-04-19 03:34:00.495495] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.191 [2024-04-19 03:34:00.495933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.191 [2024-04-19 03:34:00.496112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.191 [2024-04-19 03:34:00.496141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.191 [2024-04-19 03:34:00.496159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.191 [2024-04-19 03:34:00.496408] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.191 [2024-04-19 03:34:00.496659] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.191 [2024-04-19 03:34:00.496685] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.191 [2024-04-19 03:34:00.496701] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.191 [2024-04-19 03:34:00.500244] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.191 [2024-04-19 03:34:00.509464] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.191 [2024-04-19 03:34:00.509874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.191 [2024-04-19 03:34:00.510044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.191 [2024-04-19 03:34:00.510072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.191 [2024-04-19 03:34:00.510090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.191 [2024-04-19 03:34:00.510329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.191 [2024-04-19 03:34:00.510593] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.191 [2024-04-19 03:34:00.510619] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.191 [2024-04-19 03:34:00.510644] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.191 [2024-04-19 03:34:00.514187] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.191 [2024-04-19 03:34:00.523396] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.191 [2024-04-19 03:34:00.523808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.191 [2024-04-19 03:34:00.524018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.191 [2024-04-19 03:34:00.524046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.191 [2024-04-19 03:34:00.524065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.191 [2024-04-19 03:34:00.524310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.191 [2024-04-19 03:34:00.524565] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.191 [2024-04-19 03:34:00.524591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.191 [2024-04-19 03:34:00.524607] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.191 [2024-04-19 03:34:00.528154] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.191 [2024-04-19 03:34:00.537363] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.191 [2024-04-19 03:34:00.537807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.191 [2024-04-19 03:34:00.537993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.191 [2024-04-19 03:34:00.538021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.191 [2024-04-19 03:34:00.538040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.191 [2024-04-19 03:34:00.538279] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.191 [2024-04-19 03:34:00.538535] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.192 [2024-04-19 03:34:00.538561] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.192 [2024-04-19 03:34:00.538579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.192 [2024-04-19 03:34:00.542125] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.192 [2024-04-19 03:34:00.551332] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.192 [2024-04-19 03:34:00.551769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.551982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.552012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.192 [2024-04-19 03:34:00.552030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.192 [2024-04-19 03:34:00.552270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.192 [2024-04-19 03:34:00.552524] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.192 [2024-04-19 03:34:00.552575] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.192 [2024-04-19 03:34:00.552592] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.192 [2024-04-19 03:34:00.556143] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.192 [2024-04-19 03:34:00.565149] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.192 [2024-04-19 03:34:00.565570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.565749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.565778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.192 [2024-04-19 03:34:00.565795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.192 [2024-04-19 03:34:00.566034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.192 [2024-04-19 03:34:00.566284] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.192 [2024-04-19 03:34:00.566311] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.192 [2024-04-19 03:34:00.566328] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.192 [2024-04-19 03:34:00.569882] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.192 [2024-04-19 03:34:00.579093] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.192 [2024-04-19 03:34:00.579514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.579687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.579716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.192 [2024-04-19 03:34:00.579734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.192 [2024-04-19 03:34:00.579972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.192 [2024-04-19 03:34:00.580216] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.192 [2024-04-19 03:34:00.580242] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.192 [2024-04-19 03:34:00.580259] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.192 [2024-04-19 03:34:00.583814] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.192 [2024-04-19 03:34:00.593016] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.192 [2024-04-19 03:34:00.593453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.593656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.593685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.192 [2024-04-19 03:34:00.593703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.192 [2024-04-19 03:34:00.593942] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.192 [2024-04-19 03:34:00.594186] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.192 [2024-04-19 03:34:00.594211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.192 [2024-04-19 03:34:00.594227] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.192 [2024-04-19 03:34:00.597784] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.192 [2024-04-19 03:34:00.606991] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.192 [2024-04-19 03:34:00.607436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.607645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.607675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.192 [2024-04-19 03:34:00.607694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.192 [2024-04-19 03:34:00.607933] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.192 [2024-04-19 03:34:00.608177] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.192 [2024-04-19 03:34:00.608208] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.192 [2024-04-19 03:34:00.608225] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.192 [2024-04-19 03:34:00.611780] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.192 [2024-04-19 03:34:00.620988] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.192 [2024-04-19 03:34:00.621400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.621551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.621581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.192 [2024-04-19 03:34:00.621600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.192 [2024-04-19 03:34:00.621839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.192 [2024-04-19 03:34:00.622083] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.192 [2024-04-19 03:34:00.622109] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.192 [2024-04-19 03:34:00.622125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.192 [2024-04-19 03:34:00.625716] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.192 [2024-04-19 03:34:00.634536] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.192 [2024-04-19 03:34:00.634963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.635141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.192 [2024-04-19 03:34:00.635169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.192 [2024-04-19 03:34:00.635187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.192 [2024-04-19 03:34:00.635444] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.193 [2024-04-19 03:34:00.635663] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.193 [2024-04-19 03:34:00.635685] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.193 [2024-04-19 03:34:00.635699] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.193 [2024-04-19 03:34:00.638944] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.193 [2024-04-19 03:34:00.648084] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.193 [2024-04-19 03:34:00.648490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.648655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.648680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.193 [2024-04-19 03:34:00.648697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.193 [2024-04-19 03:34:00.648911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.193 [2024-04-19 03:34:00.649129] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.193 [2024-04-19 03:34:00.649151] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.193 [2024-04-19 03:34:00.649170] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.193 [2024-04-19 03:34:00.652362] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.193 [2024-04-19 03:34:00.661692] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.193 [2024-04-19 03:34:00.662105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.662290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.662316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.193 [2024-04-19 03:34:00.662331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.193 [2024-04-19 03:34:00.662556] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.193 [2024-04-19 03:34:00.662789] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.193 [2024-04-19 03:34:00.662810] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.193 [2024-04-19 03:34:00.662824] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.193 [2024-04-19 03:34:00.666161] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.193 [2024-04-19 03:34:00.675194] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.193 [2024-04-19 03:34:00.675592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.675766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.675792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.193 [2024-04-19 03:34:00.675808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.193 [2024-04-19 03:34:00.676036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.193 [2024-04-19 03:34:00.676247] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.193 [2024-04-19 03:34:00.676268] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.193 [2024-04-19 03:34:00.676282] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.193 [2024-04-19 03:34:00.679448] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.193 [2024-04-19 03:34:00.688771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.193 [2024-04-19 03:34:00.689162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.689303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.689328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.193 [2024-04-19 03:34:00.689344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.193 [2024-04-19 03:34:00.689566] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.193 [2024-04-19 03:34:00.689796] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.193 [2024-04-19 03:34:00.689818] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.193 [2024-04-19 03:34:00.689832] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.193 [2024-04-19 03:34:00.693010] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.193 [2024-04-19 03:34:00.702298] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.193 [2024-04-19 03:34:00.702702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.702888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.702913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.193 [2024-04-19 03:34:00.702929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.193 [2024-04-19 03:34:00.703143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.193 [2024-04-19 03:34:00.703369] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.193 [2024-04-19 03:34:00.703400] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.193 [2024-04-19 03:34:00.703414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.193 [2024-04-19 03:34:00.706579] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.193 [2024-04-19 03:34:00.715891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.193 [2024-04-19 03:34:00.716262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.716450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.716477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.193 [2024-04-19 03:34:00.716493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.193 [2024-04-19 03:34:00.716707] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.193 [2024-04-19 03:34:00.716934] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.193 [2024-04-19 03:34:00.716954] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.193 [2024-04-19 03:34:00.716968] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.193 [2024-04-19 03:34:00.720108] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.193 [2024-04-19 03:34:00.729453] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.193 [2024-04-19 03:34:00.729847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.729977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.193 [2024-04-19 03:34:00.730002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.193 [2024-04-19 03:34:00.730018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.193 [2024-04-19 03:34:00.730247] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.194 [2024-04-19 03:34:00.730466] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.194 [2024-04-19 03:34:00.730487] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.194 [2024-04-19 03:34:00.730501] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.194 [2024-04-19 03:34:00.733669] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.194 [2024-04-19 03:34:00.743000] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.194 [2024-04-19 03:34:00.743379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.194 [2024-04-19 03:34:00.743573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.194 [2024-04-19 03:34:00.743599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.194 [2024-04-19 03:34:00.743615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.194 [2024-04-19 03:34:00.743828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.194 [2024-04-19 03:34:00.744055] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.194 [2024-04-19 03:34:00.744076] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.194 [2024-04-19 03:34:00.744089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.194 [2024-04-19 03:34:00.747322] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.453 [2024-04-19 03:34:00.756655] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.453 [2024-04-19 03:34:00.757012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.453 [2024-04-19 03:34:00.757194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.453 [2024-04-19 03:34:00.757219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.453 [2024-04-19 03:34:00.757235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.453 [2024-04-19 03:34:00.757473] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.453 [2024-04-19 03:34:00.757686] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.453 [2024-04-19 03:34:00.757706] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.453 [2024-04-19 03:34:00.757720] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.453 [2024-04-19 03:34:00.760911] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.453 [2024-04-19 03:34:00.770060] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.453 [2024-04-19 03:34:00.770443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.453 [2024-04-19 03:34:00.770603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.453 [2024-04-19 03:34:00.770629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.453 [2024-04-19 03:34:00.770645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.453 [2024-04-19 03:34:00.770859] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.453 [2024-04-19 03:34:00.771086] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.453 [2024-04-19 03:34:00.771107] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.453 [2024-04-19 03:34:00.771120] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.453 [2024-04-19 03:34:00.774270] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.453 [2024-04-19 03:34:00.783582] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.453 [2024-04-19 03:34:00.783972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.453 [2024-04-19 03:34:00.784126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.453 [2024-04-19 03:34:00.784152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.453 [2024-04-19 03:34:00.784168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.453 [2024-04-19 03:34:00.784390] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.453 [2024-04-19 03:34:00.784609] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.453 [2024-04-19 03:34:00.784630] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.453 [2024-04-19 03:34:00.784644] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.453 [2024-04-19 03:34:00.787803] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.453 [2024-04-19 03:34:00.796975] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.453 [2024-04-19 03:34:00.797391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.453 [2024-04-19 03:34:00.797548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.453 [2024-04-19 03:34:00.797573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.453 [2024-04-19 03:34:00.797589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.453 [2024-04-19 03:34:00.797802] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.453 [2024-04-19 03:34:00.798030] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.453 [2024-04-19 03:34:00.798051] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.453 [2024-04-19 03:34:00.798064] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.453 [2024-04-19 03:34:00.801221] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.453 [2024-04-19 03:34:00.810579] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.453 [2024-04-19 03:34:00.810950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.453 [2024-04-19 03:34:00.811118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.453 [2024-04-19 03:34:00.811142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138d170 with addr=10.0.0.2, port=4420 00:20:23.453 [2024-04-19 03:34:00.811157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:20:23.453 [2024-04-19 03:34:00.811394] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138d170 (9): Bad file descriptor 00:20:23.453 [2024-04-19 03:34:00.811629] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.453 [2024-04-19 03:34:00.811651] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.453 [2024-04-19 03:34:00.811665] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.453 [2024-04-19 03:34:00.814876] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.715 03:34:01 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:23.715 03:34:01 -- common/autotest_common.sh@850 -- # return 0
00:20:23.715 03:34:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:20:23.715 03:34:01 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:23.715 03:34:01 -- common/autotest_common.sh@10 -- # set +x
00:20:23.715 03:34:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:23.715 03:34:01 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:23.715 03:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.715 03:34:01 -- common/autotest_common.sh@10 -- # set +x
00:20:23.715 [2024-04-19 03:34:01.153458] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:23.715 03:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.715 03:34:01 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:23.715 03:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.715 03:34:01 -- common/autotest_common.sh@10 -- # set +x
00:20:23.715 Malloc0
00:20:23.715 03:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.715 03:34:01 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:23.715 03:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.715 03:34:01 -- common/autotest_common.sh@10 -- # set +x
00:20:23.715 03:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.715 03:34:01 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:23.715 03:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.715 03:34:01 -- common/autotest_common.sh@10 -- # set +x
00:20:23.715 03:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.715 03:34:01 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:23.715 03:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.715 03:34:01 -- common/autotest_common.sh@10 -- # set +x
00:20:23.715 [2024-04-19 03:34:01.217129] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:23.715 03:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.715 03:34:01 -- host/bdevperf.sh@38 -- # wait 325181
00:20:23.715 [2024-04-19 03:34:01.229092] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.973 [2024-04-19 03:34:01.305478] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
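For readers replaying the target bring-up by hand, the rpc_cmd calls traced above map one-for-one onto SPDK's scripts/rpc.py subcommands. A sketch of the equivalent standalone sequence against a running nvmf_tgt (assuming the default RPC socket; flags exactly as traced):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MB malloc bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the host's next reset attempt succeeds, which is exactly the transition visible in the two NOTICE lines above.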
00:20:33.947
00:20:33.947                                                                                    Latency(us)
00:20:33.947 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:33.947 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:33.947      Verification LBA range: start 0x0 length 0x4000
00:20:33.947      Nvme1n1                :      15.01    6109.97      23.87   10467.80       0.00    7696.66     831.34   21942.42
00:20:33.947 ===================================================================================================================
00:20:33.947 Total                       :              6109.97      23.87   10467.80       0.00    7696.66     831.34   21942.42
00:20:33.947 03:34:09 -- host/bdevperf.sh@39 -- # sync
00:20:33.947 03:34:09 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:33.947 03:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:33.947 03:34:09 -- common/autotest_common.sh@10 -- # set +x
00:20:33.947 03:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:33.947 03:34:09 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:20:33.947 03:34:09 -- host/bdevperf.sh@44 -- # nvmftestfini
00:20:33.947 03:34:09 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:33.947 03:34:09 -- nvmf/common.sh@117 -- # sync
00:20:33.947 03:34:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:33.947 03:34:09 -- nvmf/common.sh@120 -- # set +e
00:20:33.947 03:34:09 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:33.947 03:34:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:33.947 rmmod nvme_tcp
00:20:33.947 rmmod nvme_fabrics
00:20:33.947 rmmod nvme_keyring
00:20:33.947 03:34:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:33.947 03:34:09 -- nvmf/common.sh@124 -- # set -e
00:20:33.947 03:34:09 -- nvmf/common.sh@125 -- # return 0
00:20:33.947 03:34:09 -- nvmf/common.sh@478 -- # '[' -n 325853 ']'
00:20:33.947 03:34:09 -- nvmf/common.sh@479 -- # killprocess 325853
00:20:33.947 03:34:09 -- common/autotest_common.sh@936 -- # '[' -z 325853 ']'
00:20:33.947 03:34:09 -- common/autotest_common.sh@940 -- # kill -0 325853
00:20:33.947 03:34:09 -- common/autotest_common.sh@941 -- # uname
00:20:33.947 03:34:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:33.947 03:34:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 325853
00:20:33.947 03:34:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:33.947 03:34:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:33.947 03:34:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 325853'
00:20:33.947 killing process with pid 325853
00:20:33.947 03:34:10 -- common/autotest_common.sh@955 -- # kill 325853
00:20:33.947 03:34:10 -- common/autotest_common.sh@960 -- # wait 325853
00:20:33.947 03:34:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:33.947 03:34:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:33.947 03:34:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:33.947 03:34:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:33.947 03:34:10 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:33.947 03:34:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:33.947 03:34:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:33.947 03:34:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:34.882 03:34:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:34.882
00:20:34.882 real    0m23.316s
00:20:34.882 user    1m2.343s
00:20:34.882 sys     0m4.492s
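As a sanity check on the bdevperf summary above: with a 4096-byte IO size, the IOPS and MiB/s columns should agree, and they do. An illustrative one-liner:

    echo '6109.97 * 4096 / 1048576' | bc -l
    # ~= 23.87, matching the reported MiB/s column

The large Fail/s figure presumably reflects the IOs issued while the controller was down during the repeated-reset phase at the start of the run.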
00:20:34.882 03:34:12 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:34.882 03:34:12 -- common/autotest_common.sh@10 -- # set +x
00:20:34.882 ************************************
00:20:34.882 END TEST nvmf_bdevperf
00:20:34.882 ************************************
00:20:34.882 03:34:12 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:20:35.142 03:34:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:35.142 03:34:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:35.142 03:34:12 -- common/autotest_common.sh@10 -- # set +x
00:20:35.142 ************************************
00:20:35.142 START TEST nvmf_target_disconnect
00:20:35.142 ************************************
00:20:35.142 03:34:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:20:35.142 * Looking for test storage...
00:20:35.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:20:35.142 03:34:12 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:35.142 03:34:12 -- nvmf/common.sh@7 -- # uname -s
00:20:35.142 03:34:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:35.142 03:34:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:35.142 03:34:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:35.142 03:34:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:35.142 03:34:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:35.142 03:34:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:35.142 03:34:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:35.142 03:34:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:35.142 03:34:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:35.142 03:34:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:35.142 03:34:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:35.142 03:34:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:20:35.142 03:34:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:35.142 03:34:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:35.142 03:34:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:35.142 03:34:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
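The NVME_HOSTNQN / NVME_HOSTID pair above is minted by nvme-cli. Illustrative invocation (the UUID is per-host, so the output will differ elsewhere):

    nvme gen-hostnqn
    # nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55   <- this host's value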
00:20:35.142 03:34:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:35.142 03:34:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:35.142 03:34:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:35.142 03:34:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:35.142 03:34:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:35.142 03:34:12 -- paths/export.sh@5 -- # export PATH
00:20:35.142 03:34:12 -- nvmf/common.sh@47 -- # : 0
00:20:35.142 03:34:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:20:35.142 03:34:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:20:35.142 03:34:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:35.142 03:34:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:35.142 03:34:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:35.142 03:34:12 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:20:35.142 03:34:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:20:35.142 03:34:12 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:20:35.142 03:34:12 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:20:35.142 03:34:12 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:20:35.142 03:34:12 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:20:35.142 03:34:12 -- host/target_disconnect.sh@77 -- # nvmftestinit
00:20:35.142 03:34:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:20:35.142 03:34:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:35.142 03:34:12 -- nvmf/common.sh@437 -- # prepare_net_devs
00:20:35.142 03:34:12 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:20:35.142 03:34:12 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:20:35.142 03:34:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:35.142 03:34:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:35.142 03:34:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:35.142 03:34:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:20:35.142 03:34:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:20:35.142 03:34:12 -- nvmf/common.sh@285 -- # xtrace_disable
00:20:35.142 03:34:12 -- common/autotest_common.sh@10 -- # set +x
00:20:37.045 03:34:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:20:37.045 03:34:14 -- nvmf/common.sh@291 -- # pci_devs=()
00:20:37.045 03:34:14 -- nvmf/common.sh@291 -- # local -a pci_devs
00:20:37.045 03:34:14 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:20:37.045 03:34:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:20:37.045 03:34:14 -- nvmf/common.sh@293 -- # pci_drivers=()
00:20:37.045 03:34:14 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:20:37.045 03:34:14 -- nvmf/common.sh@295 -- # net_devs=()
00:20:37.045 03:34:14 -- nvmf/common.sh@295 -- # local -ga net_devs
00:20:37.045 03:34:14 -- nvmf/common.sh@296 -- # e810=()
00:20:37.045 03:34:14 -- nvmf/common.sh@296 -- # local -ga e810
00:20:37.045 03:34:14 -- nvmf/common.sh@297 -- # x722=()
00:20:37.045 03:34:14 -- nvmf/common.sh@297 -- # local -ga x722
00:20:37.045 03:34:14 -- nvmf/common.sh@298 -- # mlx=()
00:20:37.045 03:34:14 -- nvmf/common.sh@298 -- # local -ga mlx
00:20:37.045 03:34:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:37.045 03:34:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:37.045 03:34:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:37.045 03:34:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:37.045 03:34:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:37.045 03:34:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:37.045 03:34:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:37.045 03:34:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:37.045 03:34:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:37.045 03:34:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:37.045 03:34:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:37.045 03:34:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:20:37.045 03:34:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:20:37.045 03:34:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:20:37.045 03:34:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:20:37.045 03:34:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:20:37.046 03:34:14 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:20:37.046 03:34:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:37.046 03:34:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:20:37.046 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:20:37.046 03:34:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:37.046 03:34:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:37.046 03:34:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:37.046 03:34:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:37.046 03:34:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:37.046 03:34:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:37.046 03:34:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:20:37.046 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:20:37.046 03:34:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:37.046 03:34:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:37.046 03:34:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:37.046 03:34:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
-- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.046 03:34:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.046 03:34:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.046 03:34:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.046 03:34:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.046 03:34:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.046 03:34:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.046 03:34:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:37.046 03:34:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.046 03:34:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:37.046 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:37.046 03:34:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.046 03:34:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.046 03:34:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.046 03:34:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:37.046 03:34:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.046 03:34:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:37.046 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:37.046 03:34:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.046 03:34:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:37.046 03:34:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:37.046 03:34:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:37.046 03:34:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:37.046 03:34:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:37.046 03:34:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.046 03:34:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.046 03:34:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.046 03:34:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:37.046 03:34:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.046 03:34:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.046 03:34:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:37.046 03:34:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.046 03:34:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.046 03:34:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:37.046 03:34:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:37.046 03:34:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.046 03:34:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.046 03:34:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.046 03:34:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.046 03:34:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:37.046 03:34:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.304 03:34:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.304 03:34:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.304 03:34:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:37.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
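
The sequence above is the entire nvmf_tcp_init topology: the first E810 port (cvl_0_0) moves into the private namespace cvl_0_0_ns_spdk and becomes the target at 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and the pings that follow verify the path in both directions. Collected into one runnable sketch, with names and addresses taken from the trace; run as root on a host where the two ports can reach each other:

  #!/usr/bin/env bash
  set -e
  TGT_IF=cvl_0_0
  INIT_IF=cvl_0_1
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush dev "$TGT_IF"
  ip -4 addr flush dev "$INIT_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"             # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INIT_IF"        # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INIT_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                            # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1        # target -> initiator
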
00:20:37.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:20:37.304 00:20:37.304 --- 10.0.0.2 ping statistics --- 00:20:37.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.304 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:20:37.304 03:34:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:20:37.304 00:20:37.304 --- 10.0.0.1 ping statistics --- 00:20:37.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.304 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:20:37.304 03:34:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.304 03:34:14 -- nvmf/common.sh@411 -- # return 0 00:20:37.304 03:34:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:37.304 03:34:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.304 03:34:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:37.304 03:34:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:37.304 03:34:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.304 03:34:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:37.304 03:34:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:37.304 03:34:14 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:20:37.304 03:34:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:37.304 03:34:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:37.304 03:34:14 -- common/autotest_common.sh@10 -- # set +x 00:20:37.304 ************************************ 00:20:37.304 START TEST nvmf_target_disconnect_tc1 00:20:37.304 ************************************ 00:20:37.304 03:34:14 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:20:37.304 03:34:14 -- host/target_disconnect.sh@32 -- # set +e 00:20:37.304 03:34:14 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:37.304 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.304 [2024-04-19 03:34:14.845814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.304 [2024-04-19 03:34:14.846048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.304 [2024-04-19 03:34:14.846080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d3ad0 with addr=10.0.0.2, port=4420 00:20:37.304 [2024-04-19 03:34:14.846118] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:37.304 [2024-04-19 03:34:14.846142] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:37.304 [2024-04-19 03:34:14.846157] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:20:37.304 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:20:37.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:20:37.304 Initializing NVMe Controllers 00:20:37.304 03:34:14 -- host/target_disconnect.sh@33 -- # trap - ERR 00:20:37.304 03:34:14 -- host/target_disconnect.sh@33 -- # print_backtrace 00:20:37.304 03:34:14 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:20:37.304 03:34:14 -- common/autotest_common.sh@1139 -- # return 0 00:20:37.304 
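
errno 111 is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 yet, so tc1 passes precisely because spdk_nvme_probe() fails. The surrounding set +e / check / set -e dance traced here is the generic bash pattern for a negative test; a minimal sketch of it, with a hypothetical placeholder in place of the reconnect invocation:

  #!/usr/bin/env bash
  set -e
  probe() { false; }   # placeholder for the command under test (the reconnect example here)

  # Negative test: the command is REQUIRED to fail. Suspend errexit around it,
  # capture the status, then assert it was non-zero before re-enabling errexit.
  set +e
  probe; rc=$?
  set -e
  if [ "$rc" -eq 0 ]; then
      echo "expected probe to fail, but it succeeded" >&2
      exit 1
  fi
  echo "probe failed as expected (rc=$rc)"
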
03:34:14 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:20:37.304 03:34:14 -- host/target_disconnect.sh@41 -- # set -e 00:20:37.304 00:20:37.304 real 0m0.092s 00:20:37.304 user 0m0.034s 00:20:37.304 sys 0m0.057s 00:20:37.304 03:34:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:37.304 03:34:14 -- common/autotest_common.sh@10 -- # set +x 00:20:37.304 ************************************ 00:20:37.304 END TEST nvmf_target_disconnect_tc1 00:20:37.304 ************************************ 00:20:37.563 03:34:14 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:20:37.563 03:34:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:37.563 03:34:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:37.563 03:34:14 -- common/autotest_common.sh@10 -- # set +x 00:20:37.563 ************************************ 00:20:37.563 START TEST nvmf_target_disconnect_tc2 00:20:37.563 ************************************ 00:20:37.563 03:34:14 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:20:37.563 03:34:14 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:20:37.563 03:34:14 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:20:37.563 03:34:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:37.563 03:34:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:37.563 03:34:14 -- common/autotest_common.sh@10 -- # set +x 00:20:37.563 03:34:14 -- nvmf/common.sh@470 -- # nvmfpid=329032 00:20:37.563 03:34:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:37.563 03:34:14 -- nvmf/common.sh@471 -- # waitforlisten 329032 00:20:37.563 03:34:14 -- common/autotest_common.sh@817 -- # '[' -z 329032 ']' 00:20:37.563 03:34:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.563 03:34:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:37.563 03:34:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.563 03:34:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:37.563 03:34:14 -- common/autotest_common.sh@10 -- # set +x 00:20:37.563 [2024-04-19 03:34:15.015274] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:20:37.563 [2024-04-19 03:34:15.015345] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.563 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.563 [2024-04-19 03:34:15.080116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.822 [2024-04-19 03:34:15.189634] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.822 [2024-04-19 03:34:15.189691] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.822 [2024-04-19 03:34:15.189712] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.822 [2024-04-19 03:34:15.189722] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
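
disconnect_init reduces to: launch nvmf_tgt inside the target namespace, pinned to the upper four cores (-m 0xF0) with full tracing (-e 0xFFFF), remember the pid, and block until the RPC socket answers. A minimal re-creation, assuming an SPDK checkout with ./build/bin and ./scripts/rpc.py and the default /var/tmp/spdk.sock socket; the polling loop is a crude stand-in for the autotest waitforlisten helper:

  #!/usr/bin/env bash
  set -e
  NS=cvl_0_0_ns_spdk
  SPDK_BIN=./build/bin            # assumption: adjust to your SPDK build directory
  RPC_SOCK=/var/tmp/spdk.sock

  ip netns exec "$NS" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!

  # Poll the RPC socket until the target answers.
  for _ in $(seq 1 100); do
      if ./scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done
  echo "nvmf_tgt running as pid $nvmfpid"
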
00:20:37.822 [2024-04-19 03:34:15.189732] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:37.822 [2024-04-19 03:34:15.189951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:37.822 [2024-04-19 03:34:15.190009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:37.822 [2024-04-19 03:34:15.190078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:20:37.822 [2024-04-19 03:34:15.190081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:37.822 03:34:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:37.822 03:34:15 -- common/autotest_common.sh@850 -- # return 0 00:20:37.822 03:34:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:37.822 03:34:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:37.822 03:34:15 -- common/autotest_common.sh@10 -- # set +x 00:20:37.822 03:34:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.822 03:34:15 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:37.822 03:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.822 03:34:15 -- common/autotest_common.sh@10 -- # set +x 00:20:37.822 Malloc0 00:20:37.822 03:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.822 03:34:15 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:37.822 03:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.822 03:34:15 -- common/autotest_common.sh@10 -- # set +x 00:20:37.822 [2024-04-19 03:34:15.358894] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.822 03:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.822 03:34:15 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.822 03:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.822 03:34:15 -- common/autotest_common.sh@10 -- # set +x 00:20:37.822 03:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.822 03:34:15 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.822 03:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.822 03:34:15 -- common/autotest_common.sh@10 -- # set +x 00:20:38.081 03:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.081 03:34:15 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:38.081 03:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.081 03:34:15 -- common/autotest_common.sh@10 -- # set +x 00:20:38.081 [2024-04-19 03:34:15.387149] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.081 03:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.081 03:34:15 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:38.081 03:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.081 03:34:15 -- common/autotest_common.sh@10 -- # set +x 00:20:38.081 03:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.081 03:34:15 -- host/target_disconnect.sh@50 -- # reconnectpid=329053 00:20:38.081 03:34:15 -- host/target_disconnect.sh@48 -- # 
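
The rpc_cmd calls traced above are the target's entire configuration for this test: a 64 MiB malloc bdev with 512-byte blocks, the TCP transport, a subsystem that admits any host, its namespace, and data plus discovery listeners on 10.0.0.2:4420. The same sequence issued through scripts/rpc.py, under the same path assumptions as the startup sketch:

  #!/usr/bin/env bash
  set -e
  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  NQN=nqn.2016-06.io.spdk:cnode1

  rpc bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB bdev, 512 B blocks
  rpc nvmf_create_transport -t tcp -o                        # flags as in the trace
  rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001  # -a: allow any host
  rpc nvmf_subsystem_add_ns "$NQN" Malloc0
  rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
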
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.081 03:34:15 -- host/target_disconnect.sh@52 -- # sleep 2 00:20:38.081 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.994 03:34:17 -- host/target_disconnect.sh@53 -- # kill -9 329032 00:20:39.994 03:34:17 -- host/target_disconnect.sh@55 -- # sleep 2 00:20:39.994 Read completed with error (sct=0, sc=8) 00:20:39.994 starting I/O failed 00:20:39.994 Read completed with error (sct=0, sc=8) 00:20:39.994 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Write completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Write completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Write completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Write completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Write completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Write completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Write completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 00:20:39.995 Read completed with error (sct=0, sc=8) 00:20:39.995 starting I/O failed 
(... the "Read/Write completed with error (sct=0, sc=8) / starting I/O failed" pair repeats for each of the 32 outstanding I/Os on every queue ...)
00:20:39.995 [2024-04-19 03:34:17.411197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
(... further completion errors ...)
00:20:39.995 [2024-04-19 03:34:17.411533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
(... further completion errors ...)
00:20:39.995 [2024-04-19 03:34:17.411848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
(... further completion errors ...)
00:20:39.996 [2024-04-19 03:34:17.412200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:39.996 [2024-04-19 03:34:17.412428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.996 [2024-04-19 03:34:17.412580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.996 [2024-04-19 03:34:17.412610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:39.996 qpair failed and we were unable to recover it.
(... the connect() failed / sock connection error / "qpair failed and we were unable to recover it." block repeats with fresh timestamps as the initiator keeps retrying tqpair=0x7f04b0000b90 against 10.0.0.2:4420 ...)
00:20:39.997 [2024-04-19 03:34:17.426632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.997 [2024-04-19 03:34:17.426868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.997 [2024-04-19 03:34:17.426913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:39.997 qpair failed and we were unable to recover it.
(... the same retry loop continues for tqpair=0x7f04b8000b90; the last block in this window is ...)
00:20:39.998 [2024-04-19 03:34:17.442015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.998 [2024-04-19 03:34:17.442200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.998 [2024-04-19 03:34:17.442226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:39.998 qpair failed and we were unable to recover it.
00:20:39.998 [2024-04-19 03:34:17.442432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.998 [2024-04-19 03:34:17.442640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.998 [2024-04-19 03:34:17.442666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.998 qpair failed and we were unable to recover it. 00:20:39.998 [2024-04-19 03:34:17.442842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.998 [2024-04-19 03:34:17.442996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.998 [2024-04-19 03:34:17.443022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.998 qpair failed and we were unable to recover it. 00:20:39.998 [2024-04-19 03:34:17.443205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.998 [2024-04-19 03:34:17.443402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.443429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.443555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.443741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.443766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.443945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.444144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.444170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.444298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.444456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.444482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.444634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.444798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.444824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 
00:20:39.999 [2024-04-19 03:34:17.444951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.445162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.445191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.445428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.445587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.445613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.445756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.445939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.445964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.446137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.446291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.446317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.446482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.446644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.446682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.446888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.447062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.447087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.447220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.447411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.447438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 
00:20:39.999 [2024-04-19 03:34:17.447597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.447729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.447755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.447941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.448150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.448175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.448334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.448514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.448541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.448712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.448868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.448899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.449060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.449244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.449285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.449496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.449648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.449692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.449866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.449996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.450041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 
00:20:39.999 [2024-04-19 03:34:17.450210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.450421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.450447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.450607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.450826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.450851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.450980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.451174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.451201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.451331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.451494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.451521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:39.999 qpair failed and we were unable to recover it. 00:20:39.999 [2024-04-19 03:34:17.451653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.999 [2024-04-19 03:34:17.451967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.452023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.452229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.452391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.452429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.452564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.452719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.452749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 
00:20:40.000 [2024-04-19 03:34:17.452934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.453091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.453117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.453307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.453467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.453493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.453693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.453855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.453880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.454011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.454166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.454191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.454354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.454556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.454582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.454765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.454949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.454974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.455180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.455359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.455395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 
00:20:40.000 [2024-04-19 03:34:17.455528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.455685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.455711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.455909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.456160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.456186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.456341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.456533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.456564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.456722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.456903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.456945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.457128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.457284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.457309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.457475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.457628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.457653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.457804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.457954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.457980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 
00:20:40.000 [2024-04-19 03:34:17.458136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.458277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.458306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.458473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.458622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.458648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.458777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.458924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.458949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.459082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.459262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.459287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.459446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.459604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.459630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.459787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.459970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.459995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.460153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.460337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.460362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 
00:20:40.000 [2024-04-19 03:34:17.460553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.460747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.460773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.460923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.461073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.461113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.461321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.461480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.461506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.461668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.461853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.461896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.462076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.462233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.000 [2024-04-19 03:34:17.462258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.000 qpair failed and we were unable to recover it. 00:20:40.000 [2024-04-19 03:34:17.462412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.462537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.462563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.462721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.462930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.462956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 
00:20:40.001 [2024-04-19 03:34:17.463088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.463277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.463319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.463490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.463674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.463699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.463836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.464018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.464043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.464168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.464323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.464348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.464559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.464688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.464714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.464867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.465021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.465047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.465227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.465391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.465418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 
00:20:40.001 [2024-04-19 03:34:17.465627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.465895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.465920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.466100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.466285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.466311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.466472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.466633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.466658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.466816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.467012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.467079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.467214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.467395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.467426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.467638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.467824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.467849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.467984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.468170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.468200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 
00:20:40.001 [2024-04-19 03:34:17.468405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.468568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.468594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.468752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.468908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.468935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.469157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.469286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.469312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.469449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.469610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.469637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.469831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.470033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.470061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.470227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.470410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.470437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.470622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.470777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.470819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 
00:20:40.001 [2024-04-19 03:34:17.470998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.471127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.471152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.471296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.471453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.471481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.471691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.471946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.471972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.472105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.472263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.472288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.472491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.472652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.472678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.001 [2024-04-19 03:34:17.472858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.473035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.001 [2024-04-19 03:34:17.473098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.001 qpair failed and we were unable to recover it. 00:20:40.002 [2024-04-19 03:34:17.473244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.473422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.473449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 
00:20:40.002 [2024-04-19 03:34:17.473636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.473832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.473860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 00:20:40.002 [2024-04-19 03:34:17.474066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.474250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.474277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 00:20:40.002 [2024-04-19 03:34:17.474457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.474628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.474654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 00:20:40.002 [2024-04-19 03:34:17.474790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.474972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.475014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 00:20:40.002 [2024-04-19 03:34:17.475167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.475346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.475372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 00:20:40.002 [2024-04-19 03:34:17.475540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.475695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.475722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 00:20:40.002 [2024-04-19 03:34:17.475904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.476082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.476108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 
00:20:40.002 [2024-04-19 03:34:17.476237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.476421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.476450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 00:20:40.002 [2024-04-19 03:34:17.476658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.476918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.476944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 00:20:40.002 [2024-04-19 03:34:17.477104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.477275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.477303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 00:20:40.002 [2024-04-19 03:34:17.477479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.477736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.477790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 00:20:40.002 [2024-04-19 03:34:17.477967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.478157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.478215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 00:20:40.002 [2024-04-19 03:34:17.478409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.478566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.478591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 00:20:40.002 [2024-04-19 03:34:17.478785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.478992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.479071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it. 
00:20:40.002 [2024-04-19 03:34:17.479235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.479432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.002 [2024-04-19 03:34:17.479458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.002 qpair failed and we were unable to recover it.
00:20:40.007 [... the identical failure cycle — two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." — repeats continuously from 03:34:17.479 through 03:34:17.540; duplicate repetitions omitted ...]
00:20:40.007 [2024-04-19 03:34:17.540416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.540575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.540603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 00:20:40.008 [2024-04-19 03:34:17.540776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.540951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.540981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 00:20:40.008 [2024-04-19 03:34:17.541160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.541330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.541359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 00:20:40.008 [2024-04-19 03:34:17.541535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.541742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.541799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 00:20:40.008 [2024-04-19 03:34:17.541947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.542112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.542141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 00:20:40.008 [2024-04-19 03:34:17.542300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.542461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.542490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 00:20:40.008 [2024-04-19 03:34:17.542650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.542840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.542870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 
00:20:40.008 [2024-04-19 03:34:17.543030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.543182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.543211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 00:20:40.008 [2024-04-19 03:34:17.543366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.543527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.543555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 00:20:40.008 [2024-04-19 03:34:17.543751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.544032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.544090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 00:20:40.008 [2024-04-19 03:34:17.544269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.544443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.544473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 00:20:40.008 [2024-04-19 03:34:17.544656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.544836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.544913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 00:20:40.008 [2024-04-19 03:34:17.545108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.545305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.545334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 00:20:40.008 [2024-04-19 03:34:17.545512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.545708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.545737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 
00:20:40.008 [2024-04-19 03:34:17.545926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.546068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.008 [2024-04-19 03:34:17.546094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.008 qpair failed and we were unable to recover it. 00:20:40.008 [2024-04-19 03:34:17.546237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.546434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.546462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.280 qpair failed and we were unable to recover it. 00:20:40.280 [2024-04-19 03:34:17.546622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.546857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.546906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.280 qpair failed and we were unable to recover it. 00:20:40.280 [2024-04-19 03:34:17.547117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.547253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.547280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.280 qpair failed and we were unable to recover it. 00:20:40.280 [2024-04-19 03:34:17.547412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.547554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.547584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.280 qpair failed and we were unable to recover it. 00:20:40.280 [2024-04-19 03:34:17.547732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.547931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.547987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.280 qpair failed and we were unable to recover it. 00:20:40.280 [2024-04-19 03:34:17.548202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.548410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.548440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.280 qpair failed and we were unable to recover it. 
00:20:40.280 [2024-04-19 03:34:17.548583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.548753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.548784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.280 qpair failed and we were unable to recover it. 00:20:40.280 [2024-04-19 03:34:17.548993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.549258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.549314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.280 qpair failed and we were unable to recover it. 00:20:40.280 [2024-04-19 03:34:17.549519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.549700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.549761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.280 qpair failed and we were unable to recover it. 00:20:40.280 [2024-04-19 03:34:17.549959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.550107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.280 [2024-04-19 03:34:17.550136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.280 qpair failed and we were unable to recover it. 00:20:40.280 [2024-04-19 03:34:17.550330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.550506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.550534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.550719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.550893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.550948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.551124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.551347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.551376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 
00:20:40.281 [2024-04-19 03:34:17.551583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.551788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.551844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.552003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.552162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.552188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.552393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.552627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.552656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.552825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.553078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.553136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.553341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.553512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.553539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.553675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.553825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.553855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.554055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.554228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.554254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 
00:20:40.281 [2024-04-19 03:34:17.554443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.554624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.554655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.554854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.555008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.555077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.555254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.555431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.555462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.555642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.555848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.555877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.556050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.556212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.556241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.556419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.556626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.556652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.556809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.557022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.557079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 
00:20:40.281 [2024-04-19 03:34:17.557254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.557436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.557463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.557662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.557942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.557994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.558198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.558365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.558404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.558607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.558765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.558811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.559010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.559190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.559244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.559447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.559615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.559642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.559804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.559946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.559989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 
00:20:40.281 [2024-04-19 03:34:17.560163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.560346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.560373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.560530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.560777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.560829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.561004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.561197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.561227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.561398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.561570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.561599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.281 [2024-04-19 03:34:17.561752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.561905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.281 [2024-04-19 03:34:17.561932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.281 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.562130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.562309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.562338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.562533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.562711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.562773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 
00:20:40.282 [2024-04-19 03:34:17.562979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.563152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.563181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.563332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.563502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.563530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.563692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.563886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.563912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.564068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.564221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.564263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.564462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.564600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.564630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.564803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.564996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.565047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.565224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.565374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.565425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 
00:20:40.282 [2024-04-19 03:34:17.565631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.565896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.565946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.566150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.566315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.566344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.566537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.566784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.566835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.567055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.567209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.567237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.567425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.567594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.567624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.567812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.567969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.568015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.568167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.568309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.568338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 
00:20:40.282 [2024-04-19 03:34:17.568529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.568702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.568733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.568918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.569131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.569188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.569395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.569541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.569571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.569747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.569914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.569973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.570150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.570306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.570350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.570534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.570693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.570720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.570907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.571147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.571207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 
00:20:40.282 [2024-04-19 03:34:17.571393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.571556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.571587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.571721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.571881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.571907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.572090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.572278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.572304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.572489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.572620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.572646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.572846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.573007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.573034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.282 [2024-04-19 03:34:17.573190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.573376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.282 [2024-04-19 03:34:17.573412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.282 qpair failed and we were unable to recover it. 00:20:40.283 [2024-04-19 03:34:17.573587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.573758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.573784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 
00:20:40.283 [2024-04-19 03:34:17.573990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.574138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.574167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 00:20:40.283 [2024-04-19 03:34:17.574336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.574522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.574549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 00:20:40.283 [2024-04-19 03:34:17.574711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.574895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.574923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 00:20:40.283 [2024-04-19 03:34:17.575091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.575266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.575301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 00:20:40.283 [2024-04-19 03:34:17.575505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.575717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.575777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 00:20:40.283 [2024-04-19 03:34:17.575983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.576115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.576162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 00:20:40.283 [2024-04-19 03:34:17.576336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.576516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.576546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 
00:20:40.283 [2024-04-19 03:34:17.576758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.576885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.576912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 00:20:40.283 [2024-04-19 03:34:17.577064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.577241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.577270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 00:20:40.283 [2024-04-19 03:34:17.577469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.577607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.577636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 00:20:40.283 [2024-04-19 03:34:17.577807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.577991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.578018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 00:20:40.283 [2024-04-19 03:34:17.578199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.578355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.578404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 00:20:40.283 [2024-04-19 03:34:17.578589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.578748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.578791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 00:20:40.283 [2024-04-19 03:34:17.578948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.579105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.579136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it. 
00:20:40.283 [2024-04-19 03:34:17.579365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.579517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.283 [2024-04-19 03:34:17.579544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.283 qpair failed and we were unable to recover it.
[... the same three-message failure repeats for every subsequent qpair connect attempt from 03:34:17.579 through 03:34:17.643, each occurrence differing only in timestamp: two posix.c:1037:posix_sock_create connect() errors with errno = 111, followed by an nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:20:40.289 [2024-04-19 03:34:17.643182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.643367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.643407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it.
00:20:40.289 [2024-04-19 03:34:17.643593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.643747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.643774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.643905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.644089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.644116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.644291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.644473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.644504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.644674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.644861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.644888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.645072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.645196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.645223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.645377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.645568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.645594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.645765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.645959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.645993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 
00:20:40.289 [2024-04-19 03:34:17.646191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.646367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.646404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.646540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.646697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.646740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.646990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.647251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.647303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.647510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.647672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.647698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.647883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.648044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.648073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.648251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.648479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.648532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.648861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.649211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.649263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 
00:20:40.289 [2024-04-19 03:34:17.649435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.649610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.649640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.649781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.649995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.650061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.650236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.650450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.650481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.650800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.651116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.651168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.651345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.651514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.651542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.651666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.651820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.651849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.651993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.652154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.652195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 
00:20:40.289 [2024-04-19 03:34:17.652397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.652575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.652604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.652772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.652988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.653039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.289 qpair failed and we were unable to recover it. 00:20:40.289 [2024-04-19 03:34:17.653239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.289 [2024-04-19 03:34:17.653411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.653442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.653617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.653774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.653801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.653982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.654139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.654182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.654359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.654534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.654560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.654729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.654996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.655050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 
00:20:40.290 [2024-04-19 03:34:17.655199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.655354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.655400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.655606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.655840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.655889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.656094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.656295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.656321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.656503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.656653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.656680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.656814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.657000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.657043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.657220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.657429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.657459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.657631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.657817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.657844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 
00:20:40.290 [2024-04-19 03:34:17.657993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.658174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.658205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.658364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.658530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.658556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.658747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.659049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.659075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.659235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.659402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.659441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.659625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.659811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.659852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.660034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.660236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.660276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.660482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.660658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.660686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 
00:20:40.290 [2024-04-19 03:34:17.660848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.661097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.661161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.661362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.661547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.661577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.661769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.661932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.661958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.662165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.662336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.662365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.662551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.662724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.662753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.662959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.663129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.663158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.663331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.663510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.663539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 
00:20:40.290 [2024-04-19 03:34:17.663726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.663879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.663905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.664083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.664265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.664294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.664456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.664612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.664641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.290 qpair failed and we were unable to recover it. 00:20:40.290 [2024-04-19 03:34:17.664809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.290 [2024-04-19 03:34:17.664950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.664976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.665135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.665319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.665348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.665531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.665675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.665710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.665881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.666194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.666247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 
00:20:40.291 [2024-04-19 03:34:17.666430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.666583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.666627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.666786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.666954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.666981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.667115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.667248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.667275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.667498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.667658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.667694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.667857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.668013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.668039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.668240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.668451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.668478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.668617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.668799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.668829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 
00:20:40.291 [2024-04-19 03:34:17.668995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.669195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.669225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.669402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.669609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.669638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.669802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.670002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.670064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.670199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.670403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.670443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.670582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.670787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.670849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.671049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.671175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.671201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.671403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.671586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.671615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 
00:20:40.291 [2024-04-19 03:34:17.671813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.672016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.672067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.672214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.672395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.672425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.672599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.672809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.672838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.673036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.673207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.673236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.673442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.673639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.673676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.673842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.674040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.674070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.674244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.674435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.674477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 
00:20:40.291 [2024-04-19 03:34:17.674661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.674840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.674894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.675098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.675285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.675312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.675448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.675616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.675642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.675836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.675994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.291 [2024-04-19 03:34:17.676021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.291 qpair failed and we were unable to recover it. 00:20:40.291 [2024-04-19 03:34:17.676225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.676370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.676409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.676606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.676768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.676809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.677009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.677257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.677310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 
00:20:40.292 [2024-04-19 03:34:17.677516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.677671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.677696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.677838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.678028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.678058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.678228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.678477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.678530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.678685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.678828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.678858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.679005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.679161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.679187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.679345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.679556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.679585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.679759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.679910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.679936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 
00:20:40.292 [2024-04-19 03:34:17.680118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.680306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.680335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.680557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.680785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.680843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.681050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.681224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.681253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.681434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.681609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.681637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.681787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.681963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.681994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.682171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.682343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.682372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.682573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.682880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.682938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 
00:20:40.292 [2024-04-19 03:34:17.683155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.683309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.683339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.683529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.683678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.683724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.683926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.684088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.684114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.684279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.684463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.684493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.684634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.684876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.684929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.685073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.685243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.685272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.685460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.685633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.685673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 
00:20:40.292 [2024-04-19 03:34:17.685818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.686027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.686054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.686212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.686386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.686414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.686610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.686841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.686899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.687083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.687235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.687279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.292 [2024-04-19 03:34:17.687464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.687664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.292 [2024-04-19 03:34:17.687704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.292 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.687904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.688084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.688111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.688244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.688422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.688464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 
00:20:40.293 [2024-04-19 03:34:17.688620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.688790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.688817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.688951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.689113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.689139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.689325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.689467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.689497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.689632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.689834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.689897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.690054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.690194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.690221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.690419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.690608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.690634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.690768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.690918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.690944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 
00:20:40.293 [2024-04-19 03:34:17.691126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.691266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.691295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.691483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.691610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.691635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.691842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.692025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.692051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.692203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.692409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.692448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.692621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.692843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.692869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.692998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.693176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.693223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.693371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.693566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.693595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 
00:20:40.293 [2024-04-19 03:34:17.693786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.693970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.694000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.694211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.694414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.694453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.694637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.694797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.694824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.695046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.695194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.695221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.695351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.695533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.695578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.695780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.695946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.695990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.696171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.696331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.696374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 
00:20:40.293 [2024-04-19 03:34:17.696566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.696727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.696754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.696943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.697157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.697224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.697398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.697607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.697635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.697808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.697992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.698021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.698209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.698370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.698412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.698601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.698889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.293 [2024-04-19 03:34:17.698947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.293 qpair failed and we were unable to recover it. 00:20:40.293 [2024-04-19 03:34:17.699120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.699283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.699313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 
00:20:40.294 [2024-04-19 03:34:17.699490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.699735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.699791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.699952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.700088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.700117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.700271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.700397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.700425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.700612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.700745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.700771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.700944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.701105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.701133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.701315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.701514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.701543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.701682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.701857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.701886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 
00:20:40.294 [2024-04-19 03:34:17.702085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.702258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.702292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.702497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.702672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.702702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.702878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.703037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.703102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.703299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.703476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.703506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.703672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.703868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.703897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.704058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.704182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.704209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.704410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.704618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.704658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 
00:20:40.294 [2024-04-19 03:34:17.704835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.704984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.705014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.705186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.705358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.705394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.705603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.705797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.705849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.706019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.706194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.706225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.706404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.706598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.706625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.706795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.706954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.706981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.707168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.707299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.707328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 
00:20:40.294 [2024-04-19 03:34:17.707549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.707790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.707843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.707993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.708167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.708196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.294 [2024-04-19 03:34:17.708340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.708521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.294 [2024-04-19 03:34:17.708550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.294 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.708730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.708920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.708980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.709119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.709313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.709342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.709526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.709707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.709734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.709888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.710047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.710079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 
00:20:40.295 [2024-04-19 03:34:17.710238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.710418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.710453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.710580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.710833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.710886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.711068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.711250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.711296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.711499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.711646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.711680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.711857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.712011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.712054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.712249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.712438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.712465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.712620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.712785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.712812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 
00:20:40.295 [2024-04-19 03:34:17.712980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.713146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.713175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.713344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.713556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.713586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.713760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.713941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.713968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.714183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.714360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.714397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.714577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.714778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.714807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.714982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.715140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.715167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.715313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.715438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.715465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 
00:20:40.295 [2024-04-19 03:34:17.715686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.715848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.715890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.716036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.716243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.716272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.716449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.716619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.716649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.716796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.716997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.717053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.717196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.717371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.717406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.717565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.717755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.717793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.717983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.718150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.718176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 
00:20:40.295 [2024-04-19 03:34:17.718334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.718535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.718565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.718716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.718864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.718892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.719057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.719277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.719306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.295 [2024-04-19 03:34:17.719512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.719662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.295 [2024-04-19 03:34:17.719692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.295 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.719829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.719994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.720024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.720191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.720363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.720400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.720584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.720723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.720751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 
00:20:40.296 [2024-04-19 03:34:17.720915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.721040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.721067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.721278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.721427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.721457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.721663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.721793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.721821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.721993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.722200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.722230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.722411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.722538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.722581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.722753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.723005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.723056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.723256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.723414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.723442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 
00:20:40.296 [2024-04-19 03:34:17.723660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.723831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.723860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.724037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.724217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.724246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.724436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.724603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.724629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.724765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.724944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.724971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.725125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.725332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.725362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.725539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.725697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.725725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.725910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.726065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.726092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 
00:20:40.296 [2024-04-19 03:34:17.726212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.726374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.726418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.726617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.726790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.726819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.727022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.727205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.727232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.727410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.727595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.727622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.727753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.727931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.727961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.728138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.728284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.728313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.728485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.728619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.728646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 
00:20:40.296 [2024-04-19 03:34:17.728777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.728962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.729005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.729190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.729405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.729435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.729601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.729755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.729782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.729963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.730122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.730164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.296 qpair failed and we were unable to recover it. 00:20:40.296 [2024-04-19 03:34:17.730336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.730523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.296 [2024-04-19 03:34:17.730551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.297 qpair failed and we were unable to recover it. 00:20:40.297 [2024-04-19 03:34:17.730708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.730923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.730981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.297 qpair failed and we were unable to recover it. 00:20:40.297 [2024-04-19 03:34:17.731160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.731325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.731355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.297 qpair failed and we were unable to recover it. 
00:20:40.297 [2024-04-19 03:34:17.731515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.731677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.731704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.297 qpair failed and we were unable to recover it. 00:20:40.297 [2024-04-19 03:34:17.731894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.732055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.732100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.297 qpair failed and we were unable to recover it. 00:20:40.297 [2024-04-19 03:34:17.732282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.732477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.732507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.297 qpair failed and we were unable to recover it. 00:20:40.297 [2024-04-19 03:34:17.732670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.732819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.732849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.297 qpair failed and we were unable to recover it. 00:20:40.297 [2024-04-19 03:34:17.733030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.733202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.733232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.297 qpair failed and we were unable to recover it. 00:20:40.297 [2024-04-19 03:34:17.733433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.733601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.733631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.297 qpair failed and we were unable to recover it. 00:20:40.297 [2024-04-19 03:34:17.733833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.734006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.297 [2024-04-19 03:34:17.734035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.297 qpair failed and we were unable to recover it. 
00:20:40.302 [2024-04-19 03:34:17.787799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.787999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.788027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.788179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.788313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.788355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.788538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.788696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.788722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.789376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.789549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.789579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.789734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.789918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.789969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.790150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.790307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.790333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.790513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.790688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.790725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 
00:20:40.302 [2024-04-19 03:34:17.790945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.791143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.791172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.791349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.791522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.791549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.791685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.791875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.791903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.792079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.792287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.792315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.792502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.792627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.792658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.792801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.792955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.792984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.793139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.793302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.793328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 
00:20:40.302 [2024-04-19 03:34:17.793486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.793641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.793667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.793822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.793978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.794004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.794140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.794312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.794340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.794511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.794692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.794720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.794936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.795083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.795112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.795293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.795476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.795506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 00:20:40.302 [2024-04-19 03:34:17.795671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.795809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-04-19 03:34:17.795838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.302 qpair failed and we were unable to recover it. 
00:20:40.303 [2024-04-19 03:34:17.796014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.796191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.796220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.796432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.796574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.796602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.796778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.796923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.796952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.797096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.797273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.797302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.797491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.797667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.797696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.797864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.798061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.798089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.798232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.798372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.798409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 
00:20:40.303 [2024-04-19 03:34:17.798568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.798752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.798785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.799010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.799147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.799172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.799299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.799499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.799529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.799679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.799834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.799860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.800020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.800199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.800227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.800416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.800583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.800608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.800831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.800984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.801010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 
00:20:40.303 [2024-04-19 03:34:17.801144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.801297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.801326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.801507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.801635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.801663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.801820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.801976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.802002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.802195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.802353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.802390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.802579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.802720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.802746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.802879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.803040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.803067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.803261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.803429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.803474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 
00:20:40.303 [2024-04-19 03:34:17.803643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.803853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.803899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.804080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.804286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.804315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.804471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.804729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.804758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.303 qpair failed and we were unable to recover it. 00:20:40.303 [2024-04-19 03:34:17.804943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.805079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.303 [2024-04-19 03:34:17.805107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.304 qpair failed and we were unable to recover it. 00:20:40.304 [2024-04-19 03:34:17.805266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-19 03:34:17.805466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-19 03:34:17.805493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.304 qpair failed and we were unable to recover it. 00:20:40.304 [2024-04-19 03:34:17.805678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-19 03:34:17.805827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-19 03:34:17.805855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.304 qpair failed and we were unable to recover it. 00:20:40.304 [2024-04-19 03:34:17.806039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-19 03:34:17.806203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-19 03:34:17.806229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.304 qpair failed and we were unable to recover it. 
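Note for triage: errno = 111 on Linux is ECONNREFUSED, i.e. the TCP connect() from the posix sock layer to 10.0.0.2:4420 is being actively refused, so no NVMe/TCP listener is accepting on that address/port while the initiator keeps retrying the qpair. A quick check from a shell on the test host (a diagnostic sketch only, not part of the autotest scripts; the address and port are taken from the log above):

    # Probe the target from the log; nc exits non-zero on refusal/timeout.
    nc -zv -w 2 10.0.0.2 4420 || echo "refused/timed out - consistent with errno 111 above"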
00:20:40.304 [... further identical connect() failures for tqpair=0x7f04b8000b90, timestamps 03:34:17.806434 through 03:34:17.808073 ...]
00:20:40.304 [2024-04-19 03:34:17.808202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f41860 is same with the state(5) to be set
00:20:40.304 [2024-04-19 03:34:17.808425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.304 [2024-04-19 03:34:17.808600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.304 [2024-04-19 03:34:17.808629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420
00:20:40.304 qpair failed and we were unable to recover it.
00:20:40.304 [... the same four-line failure sequence repeats for the new qpair tqpair=0x7f04a8000b90, timestamps 03:34:17.808787 through 03:34:17.839180, identical except for the timestamps; the wall-clock prefix advances from 00:20:40.304 to 00:20:40.585 partway through ...]
00:20:40.585 [2024-04-19 03:34:17.839389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.839567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.839593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 00:20:40.585 [2024-04-19 03:34:17.839778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.840003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.840050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 00:20:40.585 [2024-04-19 03:34:17.840249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.840481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.840509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 00:20:40.585 [2024-04-19 03:34:17.840673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.840838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.840867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 00:20:40.585 [2024-04-19 03:34:17.841029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.841203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.841232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 00:20:40.585 [2024-04-19 03:34:17.841412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.841603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.841632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 00:20:40.585 [2024-04-19 03:34:17.841851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.842008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.842034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 
00:20:40.585 [2024-04-19 03:34:17.842194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.842399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.842429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 00:20:40.585 [2024-04-19 03:34:17.842587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.842755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.842799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 00:20:40.585 [2024-04-19 03:34:17.842968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.843164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.843193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 00:20:40.585 [2024-04-19 03:34:17.843397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.843592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.843618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 00:20:40.585 [2024-04-19 03:34:17.843775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.843950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.843979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 00:20:40.585 [2024-04-19 03:34:17.844179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.844337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.844363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 00:20:40.585 [2024-04-19 03:34:17.844526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.844687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.585 [2024-04-19 03:34:17.844713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.585 qpair failed and we were unable to recover it. 
00:20:40.585 [2024-04-19 03:34:17.844976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.845168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.845216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.845396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.845572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.845603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.845792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.845950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.845977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.846117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.846273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.846301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.846464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.846622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.846665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.846838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.847012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.847042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.847243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.847393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.847420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 
00:20:40.586 [2024-04-19 03:34:17.847557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.847693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.847720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.847882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.848098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.848144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.848343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.848539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.848566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.848748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.848944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.848992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.849158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.849333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.849367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.849560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.849738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.849768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.849938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.850108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.850138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 
00:20:40.586 [2024-04-19 03:34:17.850390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.850577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.850603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.850743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.850936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.850962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.851102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.851265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.851307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.851507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.851673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.851703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.851885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.852081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.852110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.852279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.852469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.852496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.852625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.852764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.852792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 
00:20:40.586 [2024-04-19 03:34:17.852982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.853181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.853214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.586 qpair failed and we were unable to recover it. 00:20:40.586 [2024-04-19 03:34:17.853362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.853543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.586 [2024-04-19 03:34:17.853570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.853750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.853929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.853959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.854158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.854331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.854359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.854549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.854684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.854711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.854871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.855024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.855049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.855205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.855391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.855434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 
00:20:40.587 [2024-04-19 03:34:17.855584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.855746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.855773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.856013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.856229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.856259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.856449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.856582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.856606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.856785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.856914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.856940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.857138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.857279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.857309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.857503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.857639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.857664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.857828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.857967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.857993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 
00:20:40.587 [2024-04-19 03:34:17.858163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.858333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.858362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.858560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.858738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.858777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.859019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.859212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.859242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.859408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.859539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.859565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.859738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.859922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.859950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.860151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.860321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.860350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 00:20:40.587 [2024-04-19 03:34:17.860534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.860675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.587 [2024-04-19 03:34:17.860701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:40.587 qpair failed and we were unable to recover it. 
00:20:40.587 [2024-04-19 03:34:17.860872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.587 [2024-04-19 03:34:17.861079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.587 [2024-04-19 03:34:17.861111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.587 qpair failed and we were unable to recover it.
[From 03:34:17.861 the identical failure loop continues against a new qpair, tqpair=0x7f04b8000b90, through 03:34:17.886, again with only the timestamps changing; every attempt ends with "qpair failed and we were unable to recover it."]
00:20:40.590 [2024-04-19 03:34:17.887045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.590 [2024-04-19 03:34:17.887225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.590 [2024-04-19 03:34:17.887255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.590 qpair failed and we were unable to recover it. 00:20:40.590 [2024-04-19 03:34:17.887457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.590 [2024-04-19 03:34:17.887591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.590 [2024-04-19 03:34:17.887618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.590 qpair failed and we were unable to recover it. 00:20:40.590 [2024-04-19 03:34:17.887809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.590 [2024-04-19 03:34:17.888008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.590 [2024-04-19 03:34:17.888038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.590 qpair failed and we were unable to recover it. 00:20:40.590 [2024-04-19 03:34:17.888179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.590 [2024-04-19 03:34:17.888352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.590 [2024-04-19 03:34:17.888389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.590 qpair failed and we were unable to recover it. 00:20:40.590 [2024-04-19 03:34:17.888574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.590 [2024-04-19 03:34:17.888713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.590 [2024-04-19 03:34:17.888761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.590 qpair failed and we were unable to recover it. 00:20:40.590 [2024-04-19 03:34:17.888910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.590 [2024-04-19 03:34:17.889151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.590 [2024-04-19 03:34:17.889181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.590 qpair failed and we were unable to recover it. 00:20:40.590 [2024-04-19 03:34:17.889351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.590 [2024-04-19 03:34:17.889520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.889547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 
00:20:40.591 [2024-04-19 03:34:17.889675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.889877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.889920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.890092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.890264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.890303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.890504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.890635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.890664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.890843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.891056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.891102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.891266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.891453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.891480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.891611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.891832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.891879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.892088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.892252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.892282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 
00:20:40.591 [2024-04-19 03:34:17.892432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.892573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.892601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.892745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.892929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.892956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.893117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.893254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.893282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.893420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.893591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.893617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.893781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.893959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.893990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.894260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.894410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.894455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.894596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.894764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.894794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 
00:20:40.591 [2024-04-19 03:34:17.894998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.895166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.895195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.895375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.895533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.895559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.895740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.895895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.895943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.896139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.896302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.896331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.896516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.896691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.896720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.896903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.897071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.897101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.897280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.897467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.897495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 
00:20:40.591 [2024-04-19 03:34:17.897651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.897778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.897820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.898009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.898183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.898225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.898413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.898597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.898624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.898863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.899091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.591 [2024-04-19 03:34:17.899133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.591 qpair failed and we were unable to recover it. 00:20:40.591 [2024-04-19 03:34:17.899286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.899453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.899481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.899654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.899784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.899812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.900023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.900216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.900245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 
00:20:40.592 [2024-04-19 03:34:17.900446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.900577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.900604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.900735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.900867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.900894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.901081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.901221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.901251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.901462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.901627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.901654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.901841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.901986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.902017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.902217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.902434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.902466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.902602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.902734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.902761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 
00:20:40.592 [2024-04-19 03:34:17.902938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.903172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.903201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.903361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.903519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.903546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.903718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.903908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.903955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.904153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.904353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.904390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.904568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.904702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.904747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.904917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.905091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.905121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.905292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.905460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.905488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 
00:20:40.592 [2024-04-19 03:34:17.905619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.905777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.905803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.906004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.906179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.906213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.906394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.906551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.906577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.906726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.906926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.906955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.907108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.907306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.907335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.907521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.907654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.907680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.592 qpair failed and we were unable to recover it. 00:20:40.592 [2024-04-19 03:34:17.907840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.592 [2024-04-19 03:34:17.907996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.908023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 
00:20:40.593 [2024-04-19 03:34:17.908163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.908331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.908358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.908503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.908654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.908681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.908843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.908999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.909025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.909180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.909350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.909376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.909519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.909668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.909698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.909864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.910086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.910134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.910300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.910463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.910490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 
00:20:40.593 [2024-04-19 03:34:17.910623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.910818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.910847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.911080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.911292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.911321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.911485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.911614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.911640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.911807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.911960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.911987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.912182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.912349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.912379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.912543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.912674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.912700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.912830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.912960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.912988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 
00:20:40.593 [2024-04-19 03:34:17.913139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.913303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.913336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.913508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.913636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.913663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.913840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.914028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.914061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.914252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.914445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.914472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.914605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.914834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.914880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.915074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.915260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.915289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.915470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.915603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.915629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 
00:20:40.593 [2024-04-19 03:34:17.915835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.915975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.916004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.916209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.916388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.916435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.916593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.916726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.916769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.916944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.917138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.593 [2024-04-19 03:34:17.917168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.593 qpair failed and we were unable to recover it. 00:20:40.593 [2024-04-19 03:34:17.917347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.917525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.917552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.917734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.917888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.917915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.918041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.918165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.918192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 
00:20:40.594 [2024-04-19 03:34:17.918378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.918547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.918573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.918728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.918927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.918957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.919138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.919268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.919294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.919495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.919646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.919673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.919855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.920085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.920119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.920336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.920488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.920516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.920657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.920874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.920904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 
00:20:40.594 [2024-04-19 03:34:17.921085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.921230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.921256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.921410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.921563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.921591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.921750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.921898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.921947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.922151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.922287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.922314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.922443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.922602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.922629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.922770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.922940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.922969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 00:20:40.594 [2024-04-19 03:34:17.923136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.923267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.594 [2024-04-19 03:34:17.923294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.594 qpair failed and we were unable to recover it. 
00:20:40.594 [2024-04-19 03:34:17.923464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.594 [2024-04-19 03:34:17.923636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.594 [2024-04-19 03:34:17.923662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.594 qpair failed and we were unable to recover it.
00:20:40.594 [2024-04-19 03:34:17.923848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.594 [2024-04-19 03:34:17.924051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.594 [2024-04-19 03:34:17.924099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.594 qpair failed and we were unable to recover it.
00:20:40.594 [2024-04-19 03:34:17.924279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.594 [2024-04-19 03:34:17.924403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.594 [2024-04-19 03:34:17.924429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.594 qpair failed and we were unable to recover it.
00:20:40.594 [2024-04-19 03:34:17.924593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.594 [2024-04-19 03:34:17.924741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.594 [2024-04-19 03:34:17.924772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.594 qpair failed and we were unable to recover it.
00:20:40.594 [2024-04-19 03:34:17.924963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.925138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.925167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.925372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.925512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.925539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.925695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.925838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.925867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.926017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.926196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.926225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.926405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.926535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.926562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.926709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.926892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.926921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.927131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.927320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.927347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.927488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.927658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.927685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.927838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.927982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.928011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.928158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.928313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.928343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.928541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.928673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.928699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.928823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.928950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.928977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.929096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.929237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.929266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.929420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.929576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.929602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.929785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.929971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.929998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.930177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.930365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.930398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.930587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.930749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.930792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.930990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.931160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.931188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.931340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.931528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.931557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.931715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.931918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.931953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.932142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.932353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.932387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.932577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.932737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.932771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.932963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.933092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.933119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.933318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.933519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.933550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.933697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.933833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.595 [2024-04-19 03:34:17.933862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.595 qpair failed and we were unable to recover it.
00:20:40.595 [2024-04-19 03:34:17.934069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.934198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.934224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.934388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.934585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.934612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.934785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.934985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.935032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.935206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.935366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.935399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.935605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.935750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.935781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.935961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.936150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.936198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.936373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.936556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.936585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.936781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.936953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.936986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.937139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.937340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.937370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.937557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.937727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.937756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.937921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.938104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.938152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.938326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.938500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.938530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.938692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.938852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.938895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.939090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.939262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.939292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.939471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.939703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.939760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.939963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.940115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.940145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.940324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.940457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.940484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.940668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.940848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.940877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.941076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.941224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.941250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.941406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.941590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.941619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.941792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.941953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.942003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.942200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.942372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.942408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.942544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.942733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.942763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.942934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.943103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.943132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.943302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.943457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.596 [2024-04-19 03:34:17.943484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.596 qpair failed and we were unable to recover it.
00:20:40.596 [2024-04-19 03:34:17.943624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.943780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.943806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.943985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.944151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.944180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.944355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.944496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.944539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.944711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.944892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.944927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.945112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.945285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.945314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.945499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.945632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.945660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.945845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.946039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.946088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.946288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.946444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.946472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.946626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.946829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.946858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.947001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.947180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.947209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.947354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.947535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.947566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.947743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.947904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.947947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.948124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.948306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.948350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.948515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.948644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.948670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.948863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.949013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.949039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.949167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.949326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.949352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.949571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.949747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.949776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.949928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.950086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.950129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.950309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.950476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.950506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.950680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.950862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.950889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.951045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.951202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.951229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.951413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.951575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.951606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.951760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.951935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.951964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.952164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.952368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.952406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.952555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.952710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.952739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.597 qpair failed and we were unable to recover it.
00:20:40.597 [2024-04-19 03:34:17.952912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.597 [2024-04-19 03:34:17.953062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.953090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.953258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.953410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.953456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.953652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.953870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.953917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.954123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.954299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.954328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.954527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.954666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.954693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.954853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.955015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.955042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.955230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.955391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.955421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.955627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.955784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.955811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.955977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.956135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.956178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.956325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.956462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.956492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.956644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.956796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.956823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.957003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.957177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.957206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.957410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.957551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.957581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.957747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.957925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.957968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.958167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.958371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.958411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.958578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.958796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.958830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.959048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.959198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.959245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.959469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.959629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.959655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.959813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.960043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.960096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.960247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.960390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.960417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.960630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.960863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.960912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.961053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.961232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.961261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.961443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.961628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.961657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.961839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.962054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.962111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.962275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.962402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.962433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.962632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.962759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.962786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.962940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.963109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.963138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.963297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.963489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.598 [2024-04-19 03:34:17.963520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.598 qpair failed and we were unable to recover it.
00:20:40.598 [2024-04-19 03:34:17.963698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.963855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.963899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.964123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.964320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.964349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.964534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.964715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.964744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.964898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.965027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.965056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.965210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.965403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.965449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.965609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.965772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.965798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.965958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.966090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.966121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.966289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.966486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.966513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.966690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.966841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.966870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.967041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.967195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.967239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.967396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.967598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.967627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.967799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.967953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.967980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.968145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.968295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.968321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.968481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.968645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.968690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.968843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.969042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.969071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.969287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.969422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.969468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.969645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.969792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.969827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.970029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.970226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.970255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.970404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.970564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.970592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.970823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.970975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.971001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.971219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.971417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.971447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.971614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.971750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.971778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.599 [2024-04-19 03:34:17.971945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.972159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.599 [2024-04-19 03:34:17.972185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.599 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.972396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.972594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.972623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.972797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.972970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.972999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.973148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.973286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.973315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.973485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.973653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.973690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.973841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.973973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.974000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.974212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.974411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.974440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.974612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.974844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.974893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.975081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.975258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.975287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.975460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.975611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.975635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.975842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.976185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.976241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.976444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.976592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.976619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.976778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.976931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.976957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.977141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.977325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.977354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.977544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.977725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.977751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.977913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.978070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.978112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.978316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.978475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.978520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.978690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.978855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.978892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.979081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.979262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.979290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.979423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.979578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.979605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.979759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.979970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.980019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.980220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.980397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.980432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.980637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.980833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.980867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.981075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.981259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.981286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.981421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.981609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.600 [2024-04-19 03:34:17.981639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.600 qpair failed and we were unable to recover it.
00:20:40.600 [2024-04-19 03:34:17.981842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.600 [2024-04-19 03:34:17.982058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.600 [2024-04-19 03:34:17.982085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.600 qpair failed and we were unable to recover it. 00:20:40.600 [2024-04-19 03:34:17.982245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.600 [2024-04-19 03:34:17.982406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.600 [2024-04-19 03:34:17.982445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.600 qpair failed and we were unable to recover it. 00:20:40.600 [2024-04-19 03:34:17.982580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.600 [2024-04-19 03:34:17.982757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.982787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.982929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.983097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.983126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.983338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.983556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.983586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.983741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.983953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.983980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.984164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.984339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.984369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 
00:20:40.601 [2024-04-19 03:34:17.984539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.984694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.984736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.984937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.985159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.985209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.985414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.985601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.985630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.985810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.985962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.985988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.986122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.986264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.986293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.986496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.986684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.986711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.986869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.987051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.987078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 
00:20:40.601 [2024-04-19 03:34:17.987257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.987475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.987523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.987687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.987879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.987905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.988064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.988229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.988258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.988404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.988566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.988592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.988792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.989013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.989064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.989241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.989442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.989472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.989672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.989971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.990033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 
00:20:40.601 [2024-04-19 03:34:17.990213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.990370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.990406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.990595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.990757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.990784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.990980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.991142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.991188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.991346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.991515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.991542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.991727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.991897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.991947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.992113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.992308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.992338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 00:20:40.601 [2024-04-19 03:34:17.992505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.992643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.992673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.601 qpair failed and we were unable to recover it. 
00:20:40.601 [2024-04-19 03:34:17.992860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.993021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.601 [2024-04-19 03:34:17.993050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.993208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.993365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.993400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.993552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.993761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.993788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.993946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.994146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.994175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.994336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.994482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.994514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.994691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.994859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.994889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.995044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.995173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.995201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 
00:20:40.602 [2024-04-19 03:34:17.995414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.995585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.995611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.995767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.995949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.995978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.996132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.996298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.996324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.996483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.996617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.996676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.996827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.997058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.997112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.997291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.997477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.997505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.997636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.997787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.997814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 
00:20:40.602 [2024-04-19 03:34:17.997969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.998126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.998153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.998309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.998440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.998467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.998627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.998817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.998844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.999042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.999204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.999234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.999394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.999525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.999551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:17.999700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.999899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:17.999925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:18.000130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.000328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.000357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 
00:20:40.602 [2024-04-19 03:34:18.000527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.000688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.000714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:18.000854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.001039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.001064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:18.001241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.001413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.001441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:18.001607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.001760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.001787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:18.001968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.002143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.002173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:18.002348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.002502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.002531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.602 qpair failed and we were unable to recover it. 00:20:40.602 [2024-04-19 03:34:18.002708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.002866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.602 [2024-04-19 03:34:18.002909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 
00:20:40.603 [2024-04-19 03:34:18.003127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.003283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.003311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.003517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.003680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.003747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.003900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.004086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.004127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.004278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.004503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.004531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.004668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.004824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.004851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.005027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.005150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.005193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.005411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.005581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.005611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 
00:20:40.603 [2024-04-19 03:34:18.005771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.006000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.006058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.006234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.006369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.006422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.006604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.006787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.006813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.006964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.007124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.007166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.007350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.007535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.007565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.007737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.007932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.007983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.008183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.008351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.008389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 
00:20:40.603 [2024-04-19 03:34:18.008598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.008735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.008762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.008956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.009106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.009135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.009315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.009483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.009513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.009665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.009849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.009876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.010075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.010227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.010256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.010453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.010629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.010657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.010817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.010974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.011001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 
00:20:40.603 [2024-04-19 03:34:18.011130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.011284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.011311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.011484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.011666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.011694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.603 qpair failed and we were unable to recover it. 00:20:40.603 [2024-04-19 03:34:18.011857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.603 [2024-04-19 03:34:18.012043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.012072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.012239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.012397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.012424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.012585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.012812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.012840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.013022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.013197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.013226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.013409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.013639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.013692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 
00:20:40.604 [2024-04-19 03:34:18.013891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.014089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.014137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.014319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.014480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.014507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.014641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.014822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.014851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.015044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.015203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.015231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.015416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.015592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.015645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.015840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.016016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.016046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.016246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.016461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.016514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 
00:20:40.604 [2024-04-19 03:34:18.016697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.016902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.016958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.017130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.017326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.017355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.017570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.017771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.017800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.018012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.018151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.018177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.018334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.018491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.018536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.018710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.018881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.018910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 00:20:40.604 [2024-04-19 03:34:18.019089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.019266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.604 [2024-04-19 03:34:18.019295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.604 qpair failed and we were unable to recover it. 
00:20:40.604 [2024-04-19 03:34:18.019467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.604 [2024-04-19 03:34:18.019619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.604 [2024-04-19 03:34:18.019648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.604 qpair failed and we were unable to recover it.
[... the same failure loop repeats continuously from 03:34:18.019785 through 03:34:18.079455: each attempt logs two posix.c:1037:posix_sock_create connect() failures (errno = 111), then nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:20:40.610 [2024-04-19 03:34:18.079590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.610 [2024-04-19 03:34:18.079769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.610 [2024-04-19 03:34:18.079812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.610 qpair failed and we were unable to recover it. 00:20:40.610 [2024-04-19 03:34:18.079994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.610 [2024-04-19 03:34:18.080169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.610 [2024-04-19 03:34:18.080198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.610 qpair failed and we were unable to recover it. 00:20:40.610 [2024-04-19 03:34:18.080351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.610 [2024-04-19 03:34:18.080538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.610 [2024-04-19 03:34:18.080568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.610 qpair failed and we were unable to recover it. 00:20:40.610 [2024-04-19 03:34:18.080748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.610 [2024-04-19 03:34:18.081002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.610 [2024-04-19 03:34:18.081052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.610 qpair failed and we were unable to recover it. 00:20:40.610 [2024-04-19 03:34:18.081256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.610 [2024-04-19 03:34:18.081475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.610 [2024-04-19 03:34:18.081527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.610 qpair failed and we were unable to recover it. 00:20:40.610 [2024-04-19 03:34:18.081701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.610 [2024-04-19 03:34:18.081856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.610 [2024-04-19 03:34:18.081887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.610 qpair failed and we were unable to recover it. 00:20:40.610 [2024-04-19 03:34:18.082092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.610 [2024-04-19 03:34:18.082264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.082295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 
00:20:40.611 [2024-04-19 03:34:18.082492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.082737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.082790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.082968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.083245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.083299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.083508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.083668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.083714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.083921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.084055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.084082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.084286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.084490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.084520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.084691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.084989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.085049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.085225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.085400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.085444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 
00:20:40.611 [2024-04-19 03:34:18.085624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.085797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.085826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.086019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.086191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.086220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.086404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.086583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.086614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.086785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.086997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.087024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.087206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.087389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.087419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.087596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.087780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.087834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.088041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.088265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.088318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 
00:20:40.611 [2024-04-19 03:34:18.088462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.088594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.088623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.088827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.089030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.089117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.089295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.089481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.089509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.089716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.090003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.090059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.090235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.090418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.090445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.090626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.090831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.090858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.090989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.091114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.091141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 
00:20:40.611 [2024-04-19 03:34:18.091325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.091482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.091509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.091664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.091822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.091855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.092020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.092201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.092230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.092410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.092588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.092617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.611 qpair failed and we were unable to recover it. 00:20:40.611 [2024-04-19 03:34:18.092790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.092967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.611 [2024-04-19 03:34:18.092993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.093193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.093406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.093433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.093590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.093804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.093870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 
00:20:40.612 [2024-04-19 03:34:18.094087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.094218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.094245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.094400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.094579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.094606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.094764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.094921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.094948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.095130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.095325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.095355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.095540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.095692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.095723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.095860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.096047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.096076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.096276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.096418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.096449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 
00:20:40.612 [2024-04-19 03:34:18.096653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.096873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.096922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.097081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.097267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.097312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.097485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.097624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.097654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.097828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.098044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.098100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.098255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.098436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.098463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.098619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.098816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.098845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.099014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.099219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.099245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 
00:20:40.612 [2024-04-19 03:34:18.099430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.099610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.099639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.099811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.100025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.100080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.100284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.100462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.100493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.100643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.100801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.100844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.101034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.101205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.101234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.101405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.101579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.101607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.101786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.101942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.101986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 
00:20:40.612 [2024-04-19 03:34:18.102185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.102348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.102377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.612 qpair failed and we were unable to recover it. 00:20:40.612 [2024-04-19 03:34:18.102592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.612 [2024-04-19 03:34:18.102753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.102780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.102943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.103120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.103147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.103319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.103510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.103537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.103725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.103964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.104027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.104233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.104424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.104479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.104627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.104841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.104875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 
00:20:40.613 [2024-04-19 03:34:18.105064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.105214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.105243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.105397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.105582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.105627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.105768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.105968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.105995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.106171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.106344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.106373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.106583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.106759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.106788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.106966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.107127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.107171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.107347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.107556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.107586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 
00:20:40.613 [2024-04-19 03:34:18.107776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.107980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.108009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.108224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.108428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.108458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.108633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.108807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.108836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.109028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.109185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.109228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.109375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.109534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.109564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.109768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.110047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.110107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.110263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.110424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.110451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 
00:20:40.613 [2024-04-19 03:34:18.110677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.110836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.110863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.111059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.111254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.111283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.111492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.111618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.111645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.111808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.111969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.112013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.112191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.112356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.112394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.112555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.112714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.112740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.613 qpair failed and we were unable to recover it. 00:20:40.613 [2024-04-19 03:34:18.112916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.613 [2024-04-19 03:34:18.113092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.113119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 
00:20:40.614 [2024-04-19 03:34:18.113248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.113420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.113451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.113600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.113733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.113760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.113888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.114067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.114109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.114246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.114416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.114446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.114659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.114875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.114923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.115132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.115304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.115333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.115550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.115710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.115737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 
00:20:40.614 [2024-04-19 03:34:18.115902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.116035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.116062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.116187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.116310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.116336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.116484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.116610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.116637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.116791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.116921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.116964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.117142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.117337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.117366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.117549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.117738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.117764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.117917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.118088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.118121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 
00:20:40.614 [2024-04-19 03:34:18.118308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.118460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.118488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.118626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.118780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.118819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.119029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.119215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.119254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.119437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.119624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.119650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.119864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.120015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.120042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.120197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.120363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.120422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 00:20:40.614 [2024-04-19 03:34:18.120582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.120780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.614 [2024-04-19 03:34:18.120807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.614 qpair failed and we were unable to recover it. 
00:20:40.900 [2024-04-19 03:34:18.174070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.174224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.174250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.900 qpair failed and we were unable to recover it. 00:20:40.900 [2024-04-19 03:34:18.174439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.174639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.174666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.900 qpair failed and we were unable to recover it. 00:20:40.900 [2024-04-19 03:34:18.174824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.175009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.175038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.900 qpair failed and we were unable to recover it. 00:20:40.900 [2024-04-19 03:34:18.175225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.175413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.175441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.900 qpair failed and we were unable to recover it. 00:20:40.900 [2024-04-19 03:34:18.175594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.175799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.175844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.900 qpair failed and we were unable to recover it. 00:20:40.900 [2024-04-19 03:34:18.176017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.176234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.176264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.900 qpair failed and we were unable to recover it. 00:20:40.900 [2024-04-19 03:34:18.176419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.176561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.176592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.900 qpair failed and we were unable to recover it. 
00:20:40.900 [2024-04-19 03:34:18.176762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.176948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.176976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.900 qpair failed and we were unable to recover it. 00:20:40.900 [2024-04-19 03:34:18.177110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.177232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.900 [2024-04-19 03:34:18.177258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.900 qpair failed and we were unable to recover it. 00:20:40.900 [2024-04-19 03:34:18.177455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.177620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.177664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.177841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.178011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.178040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.178207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.178387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.178417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.178575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.178773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.178802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.178951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.179093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.179122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 
00:20:40.901 [2024-04-19 03:34:18.179327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.179478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.179508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.179656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.179827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.179856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.180037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.180180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.180210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.180408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.180547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.180591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.180732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.180878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.180906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.181117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.181270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.181297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.181459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.181587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.181613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 
00:20:40.901 [2024-04-19 03:34:18.181775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.181985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.182042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.182187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.182354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.182402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.182610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.182739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.182765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.182922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.183138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.183164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.183289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.183440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.183468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.183626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.183749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.183775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.183932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.184116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.184144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 
00:20:40.901 [2024-04-19 03:34:18.184280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.184454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.184483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.184637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.184795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.184837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.184977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.185172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.185197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.185359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.185497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.185525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.901 qpair failed and we were unable to recover it. 00:20:40.901 [2024-04-19 03:34:18.185655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.185781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.901 [2024-04-19 03:34:18.185808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.186014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.186147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.186175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.186343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.186551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.186582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 
00:20:40.902 [2024-04-19 03:34:18.186745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.186906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.186932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.187119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.187281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.187310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.187504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.187663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.187690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.187847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.188109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.188161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.188350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.188524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.188551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.188747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.188950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.188977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.189137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.189296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.189323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 
00:20:40.902 [2024-04-19 03:34:18.189531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.189751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.189804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.189980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.190125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.190154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.190333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.190513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.190542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.190688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.190823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.190852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.190996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.191155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.191181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.191343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.191510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.191554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.191761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.191942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.191969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 
00:20:40.902 [2024-04-19 03:34:18.192121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.192276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.192303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.192523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.192654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.192681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.192892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.193189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.193246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.193460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.193634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.193668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.193845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.194006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.194032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.194193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.194343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.194372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.194571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.194700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.194727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 
00:20:40.902 [2024-04-19 03:34:18.194890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.195029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.195056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.195210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.195369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.195408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.902 qpair failed and we were unable to recover it. 00:20:40.902 [2024-04-19 03:34:18.195589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.902 [2024-04-19 03:34:18.195753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.195796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.195973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.196171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.196200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.196418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.196549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.196575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.196717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.196875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.196902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.197084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.197223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.197249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 
00:20:40.903 [2024-04-19 03:34:18.197392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.197624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.197655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.197809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.197962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.197993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.198195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.198370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.198408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.198594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.198760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.198805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.199003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.199194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.199262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.199441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.199599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.199650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.199820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.199993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.200023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 
00:20:40.903 [2024-04-19 03:34:18.200193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.200367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.200406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.200567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.200750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.200794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.200942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.201115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.201145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.201314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.201465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.201494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.201663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.201842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.201872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.202048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.202248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.202275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.202457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.202656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.202690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 
00:20:40.903 [2024-04-19 03:34:18.202866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.203023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.203067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.203241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.203378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.203417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.203563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.203729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.203758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.203938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.204105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.204134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.204339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.204505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.204548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.204721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.204957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.205018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.205222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.205353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.205391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 
00:20:40.903 [2024-04-19 03:34:18.205594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.205874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.205928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.206078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.206252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.206281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.903 qpair failed and we were unable to recover it. 00:20:40.903 [2024-04-19 03:34:18.206462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.903 [2024-04-19 03:34:18.206589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.206622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.206833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.207068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.207120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.207315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.207457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.207487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.207632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.207816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.207860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.208058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.208237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.208266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 
00:20:40.904 [2024-04-19 03:34:18.208446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.208604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.208631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.208794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.208919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.208962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.209161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.209336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.209362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.209563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.209691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.209717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.209850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.210050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.210084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.210267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.210444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.210479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.210626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.210797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.210826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 
00:20:40.904 [2024-04-19 03:34:18.211036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.211168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.211195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.211330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.211521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.211551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.211731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.211889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.211916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.212081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.212295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.212324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.212500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.212697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.212762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.212935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.213108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.213137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.213289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.213472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.213515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 
00:20:40.904 [2024-04-19 03:34:18.213680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.213879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.213907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.214111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.214294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.214328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.214480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.214616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.214644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.214804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.215027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.215081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.215267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.215442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.215474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.215671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.215807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.215833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 00:20:40.904 [2024-04-19 03:34:18.216011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.216200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.904 [2024-04-19 03:34:18.216227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.904 qpair failed and we were unable to recover it. 
00:20:40.910 [2024-04-19 03:34:18.272788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.272993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.273041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.273214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.273369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.273419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.273631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.273770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.273795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.273931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.274066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.274092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.274250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.274409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.274455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.274625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.274770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.274799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.274963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.275128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.275157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 
00:20:40.910 [2024-04-19 03:34:18.275332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.275486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.275515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.275679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.275856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.275886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.276016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.276145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.276171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.276349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.276532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.276562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.276751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.276922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.276986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.277131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.277294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.277323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.277527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.277687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.277712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 
00:20:40.910 [2024-04-19 03:34:18.277866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.278018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.278046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.278233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.278373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.278406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.278574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.278703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.278729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.278866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.279017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.279044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.279242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.279414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.279444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.279627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.279787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.910 [2024-04-19 03:34:18.279837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.910 qpair failed and we were unable to recover it. 00:20:40.910 [2024-04-19 03:34:18.280034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.280195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.280222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 
00:20:40.911 [2024-04-19 03:34:18.280444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.280620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.280648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.280802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.280961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.280986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.281203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.281404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.281433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.281615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.281779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.281805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.281964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.282142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.282167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.282321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.282499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.282525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.282682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.282863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.282892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 
00:20:40.911 [2024-04-19 03:34:18.283067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.283190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.283235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.283396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.283584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.283613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.283813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.284021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.284070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.284249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.284409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.284454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.284624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.284812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.284849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.285061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.285247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.285272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.285433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.285591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.285617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 
00:20:40.911 [2024-04-19 03:34:18.285753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.285911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.285937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.286090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.286222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.286247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.286435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.286593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.286619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.286803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.287007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.287041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.287234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.287412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.287441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.287619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.287745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.287787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.287970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.288133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.288159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 
00:20:40.911 [2024-04-19 03:34:18.288289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.288497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.288526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.288672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.288852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.288878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.289064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.289229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.289258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.289459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.289669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.289695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.289877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.290030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.290056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.290218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.290372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.290409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 00:20:40.911 [2024-04-19 03:34:18.290608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.290760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.911 [2024-04-19 03:34:18.290786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.911 qpair failed and we were unable to recover it. 
00:20:40.911 [2024-04-19 03:34:18.290976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.291131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.291157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.291356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.291512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.291542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.291718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.291888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.291916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.292093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.292247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.292273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.292453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.292642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.292670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.292833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.293075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.293125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.293326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.293457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.293485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 
00:20:40.912 [2024-04-19 03:34:18.293680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.293827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.293855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.294018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.294215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.294244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.294451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.294624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.294652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.294834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.295051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.295076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.295210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.295387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.295417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.295569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.295733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.295759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.295889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.296094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.296148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 
00:20:40.912 [2024-04-19 03:34:18.296349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.296564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.296593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.296802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.296948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.296978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.297152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.297275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.297301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.297425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.297580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.297606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.297786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.297938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.297964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.298123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.298301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.298329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.298520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.298699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.298767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 
00:20:40.912 [2024-04-19 03:34:18.298976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.299172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.299201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.299353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.299515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.299544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.299752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.299880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.299906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.300066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.300267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.300295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.300483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.300676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.300701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.300829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.301002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.301030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.912 [2024-04-19 03:34:18.301217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.301376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.301412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 
00:20:40.912 [2024-04-19 03:34:18.301598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.301752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.912 [2024-04-19 03:34:18.301794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.912 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.301971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.302127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.302154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.302353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.302490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.302534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.302681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.302894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.302931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.303157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.303306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.303334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.303523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.303746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.303815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.303992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.304214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.304273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 
00:20:40.913 [2024-04-19 03:34:18.304450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.304602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.304631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.304827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.305024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.305052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.305242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.305477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.305506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.305721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.305998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.306050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.306258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.306402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.306436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.306601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.306750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.306781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.306951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.307133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.307160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 
00:20:40.913 [2024-04-19 03:34:18.307316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.307497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.307526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.307705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.307956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.308008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.308180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.308353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.308399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.308616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.308832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.308861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.309012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.309182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.309211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.309364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.309540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.309584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.309774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.309935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.309961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 
00:20:40.913 [2024-04-19 03:34:18.310133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.310335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.310362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.310562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.310702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.310728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.310892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.311054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.311080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.311282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.311524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.311553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.311730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.311965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.312013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.312188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.312362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.312395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.312534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.312703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.312730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 
00:20:40.913 [2024-04-19 03:34:18.312888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.313110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.313136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.913 [2024-04-19 03:34:18.313291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.313420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.913 [2024-04-19 03:34:18.313446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.913 qpair failed and we were unable to recover it. 00:20:40.914 [2024-04-19 03:34:18.313647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.914 [2024-04-19 03:34:18.313797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.914 [2024-04-19 03:34:18.313824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.914 qpair failed and we were unable to recover it. 00:20:40.914 [2024-04-19 03:34:18.313985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.914 [2024-04-19 03:34:18.314118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.914 [2024-04-19 03:34:18.314144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.914 qpair failed and we were unable to recover it. 00:20:40.914 [2024-04-19 03:34:18.314315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.914 [2024-04-19 03:34:18.314519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.914 [2024-04-19 03:34:18.314548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.914 qpair failed and we were unable to recover it. 00:20:40.914 [2024-04-19 03:34:18.314752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.914 [2024-04-19 03:34:18.314910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.914 [2024-04-19 03:34:18.314951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.914 qpair failed and we were unable to recover it. 00:20:40.914 [2024-04-19 03:34:18.315153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.914 [2024-04-19 03:34:18.315335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.914 [2024-04-19 03:34:18.315364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:40.914 qpair failed and we were unable to recover it. 
00:20:40.914 [2024-04-19 03:34:18.315585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.315766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.315795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.315997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.316204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.316230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.316432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.316595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.316624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.316815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.316996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.317022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.317191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.317389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.317417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.317569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.317816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.317872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.318084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.318261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.318291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.318506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.318692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.318751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.318948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.319235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.319289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.319478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.319612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.319654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.319847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.320105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.320154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.320355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.320539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.320568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.320755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.320890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.320917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.321105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.321302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.321328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.321514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.321681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.321710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.321881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.322015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.322057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.322193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.322362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.322398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.322583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.322779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.322826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.322983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.323168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.323211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.914 qpair failed and we were unable to recover it.
00:20:40.914 [2024-04-19 03:34:18.323393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.323575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.914 [2024-04-19 03:34:18.323603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.323797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.323949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.323979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.324143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.324301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.324332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.324526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.324677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.324704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.324881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.325114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.325143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.325317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.325479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.325506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.325639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.325832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.325861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.326027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.326187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.326213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.326340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.326535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.326567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.326719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.326904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.326930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.327064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.327226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.327252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.327421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.327558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.327584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.327744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.327934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.327960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.328143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.328344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.328370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.328543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.328692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.328718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.328908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.329162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.329215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.329406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.329584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.329612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.329789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.329948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.329974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.330174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.330371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.330413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.330606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.330850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.330907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.331090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.331264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.331292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.331480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.331631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.331659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.331809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.332008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.332037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.332226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.332430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.332459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.332636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.332834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.332860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.333026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.333318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.333375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.333601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.333809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.333861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.334074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.334208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.334234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.334397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.334562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.334587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.915 [2024-04-19 03:34:18.334761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.334916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.915 [2024-04-19 03:34:18.334942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.915 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.335071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.335221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.335253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.335379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.335554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.335579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.335769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.335927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.335953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.336138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.336321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.336350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.336583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.336716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.336758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.336936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.337071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.337097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.337281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.337452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.337481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.337655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.337808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.337836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.338042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.338172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.338198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.338392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.338559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.338604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.338788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.338955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.338983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.339185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.339307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.339333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.339466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.339621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.339652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.339851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.340016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.340044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.340254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.340422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.340447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.340583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.340740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.340766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.340926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.341081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.341106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.341266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.341428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.341454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.341611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.341758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.341787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.341982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.342183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.342212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.342395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.342561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.342587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.342772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.342949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.342975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.343167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.343293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.343319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.343475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.343609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.343641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.343826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.343953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.343979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.344177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.344333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.344359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.344580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.344786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.344841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.345017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.345210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.345238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.345435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.345608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.916 [2024-04-19 03:34:18.345644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.916 qpair failed and we were unable to recover it.
00:20:40.916 [2024-04-19 03:34:18.345822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.345957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.345987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.346173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.346352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.346387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.346575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.346759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.346785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.346946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.347127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.347156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.347355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.347540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.347569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.347722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.347887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.347915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.348072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.348257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.348282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.348426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.348616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.348649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.348830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.348956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.348982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.349162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.349307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.349335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.349516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.349701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.349744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.349894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.350057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.350085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.350267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.350448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.350474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.350648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.350789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.350817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.350994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.351165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.351194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.351372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.351534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.351575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.351731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.351941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.351967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.352168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.352317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.352345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.352510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.352694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.352720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.352906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.353113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.353139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.353337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.353532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.353560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.353724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.353925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.353954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.354130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.354301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.354329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.354511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.354696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.354722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.354874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.355008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.355050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.355251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.355425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.355454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.355630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.355836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.355861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.356047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.356203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.356245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.356412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.356555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.356584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.917 qpair failed and we were unable to recover it.
00:20:40.917 [2024-04-19 03:34:18.356759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.917 [2024-04-19 03:34:18.356918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.356946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.357110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.357287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.357316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.357532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.357689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.357714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.357857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.358004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.358032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.358180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.358319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.358345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.358556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.358725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.358754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.358921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.359168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.359220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.359421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.359613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.359641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.359826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.359982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.360007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.360219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.360417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.360446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.360628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.360760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.360785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.360965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.361132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.361201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.361407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.361589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.361618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.361816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.361993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.362019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.362198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.362396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.362426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.362600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.362796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.362825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.362999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.363183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.363208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.363359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.363569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.363594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.363774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.363894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.918 [2024-04-19 03:34:18.363919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:40.918 qpair failed and we were unable to recover it.
00:20:40.918 [2024-04-19 03:34:18.364076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.364230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.364255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.918 qpair failed and we were unable to recover it. 00:20:40.918 [2024-04-19 03:34:18.364415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.364561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.364589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.918 qpair failed and we were unable to recover it. 00:20:40.918 [2024-04-19 03:34:18.364736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.364907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.364932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.918 qpair failed and we were unable to recover it. 00:20:40.918 [2024-04-19 03:34:18.365063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.365259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.365292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.918 qpair failed and we were unable to recover it. 00:20:40.918 [2024-04-19 03:34:18.365492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.365728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.365787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.918 qpair failed and we were unable to recover it. 00:20:40.918 [2024-04-19 03:34:18.365959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.366158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.366184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.918 qpair failed and we were unable to recover it. 00:20:40.918 [2024-04-19 03:34:18.366341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.366528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.366554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.918 qpair failed and we were unable to recover it. 
00:20:40.918 [2024-04-19 03:34:18.366742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.366918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.366946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.918 qpair failed and we were unable to recover it. 00:20:40.918 [2024-04-19 03:34:18.367118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.367266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.367296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.918 qpair failed and we were unable to recover it. 00:20:40.918 [2024-04-19 03:34:18.367468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.367597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.367623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.918 qpair failed and we were unable to recover it. 00:20:40.918 [2024-04-19 03:34:18.367808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.918 [2024-04-19 03:34:18.367958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.367984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.368177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.368350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.368378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.368567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.368700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.368726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.368901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.369198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.369266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 
00:20:40.919 [2024-04-19 03:34:18.369466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.369614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.369643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.369791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.369947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.369993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.370199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.370356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.370406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.370590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.370785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.370813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.371011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.371166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.371192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.371349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.371484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.371510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.371672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.371827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.371852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 
00:20:40.919 [2024-04-19 03:34:18.372019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.372191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.372219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.372362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.372549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.372575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.372734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.372906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.372934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.373087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.373234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.373259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.373429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.373608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.373637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.373810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.374008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.374054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.374242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.374426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.374452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 
00:20:40.919 [2024-04-19 03:34:18.374629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.374801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.374829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.374994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.375170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.375198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.375341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.375504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.375546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.375746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.375924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.375952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.376137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.376318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.376343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.376502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.376630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.376673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 00:20:40.919 [2024-04-19 03:34:18.376877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.377093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.377144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.919 qpair failed and we were unable to recover it. 
00:20:40.919 [2024-04-19 03:34:18.377348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.919 [2024-04-19 03:34:18.377501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.377531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.377711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.377866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.377892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.378097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.378288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.378316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.378514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.378677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.378706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.378885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.379047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.379073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.379204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.379360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.379392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.379576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.379846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.379897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 
00:20:40.920 [2024-04-19 03:34:18.380083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.380287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.380316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.380489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.380634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.380662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.380838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.381037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.381099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.381260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.381450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.381497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.381669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.381887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.381938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.382140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.382342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.382372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.382573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.382707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.382733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 
00:20:40.920 [2024-04-19 03:34:18.382920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.383146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.383203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.383400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.383607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.383653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.383805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.383962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.383988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.384147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.384332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.384361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.384530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.384714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.384751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.384933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.385138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.385167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.385326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.385541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.385568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 
00:20:40.920 [2024-04-19 03:34:18.385769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.386080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.386137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.386341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.386536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.386565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.386715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.386913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.386942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.387084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.387258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.387289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.387462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.387633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.387659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.387786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.387964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.387992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.388191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.388363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.388398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 
00:20:40.920 [2024-04-19 03:34:18.388581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.388759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.388825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.920 qpair failed and we were unable to recover it. 00:20:40.920 [2024-04-19 03:34:18.389005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.389161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.920 [2024-04-19 03:34:18.389192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.389343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.389544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.389574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.389761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.389915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.389941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.390099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.390253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.390294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.390486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.390644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.390670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.390852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.390986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.391013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 
00:20:40.921 [2024-04-19 03:34:18.391169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.391320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.391346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.391543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.391726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.391752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.391904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.392075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.392105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.392277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.392424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.392454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.392639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.392798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.392826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.392991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.393147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.393172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.393361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.393522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.393565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 
00:20:40.921 [2024-04-19 03:34:18.393768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.393978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.394004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.394154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.394327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.394356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.394526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.394722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.394785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.394952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.395135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.395206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.395391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.395603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.395662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.395844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.396040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.396107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.396246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.396480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.396540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 
00:20:40.921 [2024-04-19 03:34:18.396746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.396903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.396929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.397087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.397271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.397301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.397478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.397676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.397705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.397910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.398102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.398159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.398334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.398543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.398572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.398742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.398878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.398905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.399062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.399218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.399244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 
00:20:40.921 [2024-04-19 03:34:18.399428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.399566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.399595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.399767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.399934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.399963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.921 [2024-04-19 03:34:18.400141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.400341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.921 [2024-04-19 03:34:18.400370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.921 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.400566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.400724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.400750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.400909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.401159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.401212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.401398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.401585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.401613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.401794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.402020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.402072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 
00:20:40.922 [2024-04-19 03:34:18.402276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.402441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.402470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.402674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.402828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.402854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.403022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.403191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.403220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.403427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.403584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.403628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.403804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.403964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.403990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.404146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.404329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.404355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.404502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.404634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.404663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 
00:20:40.922 [2024-04-19 03:34:18.404851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.405076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.405103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.405264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.405458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.405502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.405698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.405872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.405898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.406051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.406225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.406254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.406455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.406623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.406651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.406828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.407045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.407098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.407296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.407434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.407460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 
00:20:40.922 [2024-04-19 03:34:18.407616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.407844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.407899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.408102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.408278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.408307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.408490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.408674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.408701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.408875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.409037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.409071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.409278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.409488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.409518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.409689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.409889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.409918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 00:20:40.922 [2024-04-19 03:34:18.410106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.410230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.922 [2024-04-19 03:34:18.410256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:40.922 qpair failed and we were unable to recover it. 
00:20:41.204 [2024-04-19 03:34:18.464837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.204 [2024-04-19 03:34:18.464989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.204 [2024-04-19 03:34:18.465016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.204 qpair failed and we were unable to recover it. 00:20:41.204 [2024-04-19 03:34:18.465182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.204 [2024-04-19 03:34:18.465395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.204 [2024-04-19 03:34:18.465425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.204 qpair failed and we were unable to recover it. 00:20:41.204 [2024-04-19 03:34:18.465633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.204 [2024-04-19 03:34:18.465769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.204 [2024-04-19 03:34:18.465810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.204 qpair failed and we were unable to recover it. 00:20:41.204 [2024-04-19 03:34:18.466018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.466184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.466238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.466427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.466558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.466584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.466720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.466886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.466931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.467087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.467284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.467313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 
00:20:41.205 [2024-04-19 03:34:18.467495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.467648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.467675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.467857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.468042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.468072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.468214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.468370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.468403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.468566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.468700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.468727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.468888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.469102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.469128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.469257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.469412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.469443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.469604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.469730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.469773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 
00:20:41.205 [2024-04-19 03:34:18.469923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.470100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.470128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.470328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.470513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.470540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.470665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.470784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.470810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.470965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.471133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.471162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.471334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.471511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.471541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.471726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.471856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.471882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.472047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.472258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.472286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 
00:20:41.205 [2024-04-19 03:34:18.472440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.472626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.472653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.472804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.472937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.472963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.473131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.473309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.473337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.473490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.473648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.473674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.473865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.474017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.474043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.474170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.474357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.474394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.474595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.474829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.474877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 
00:20:41.205 [2024-04-19 03:34:18.475033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.475166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.475192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.205 qpair failed and we were unable to recover it. 00:20:41.205 [2024-04-19 03:34:18.475371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.205 [2024-04-19 03:34:18.475556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.475585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.475739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.475907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.475933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.476089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.476262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.476291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.476450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.476611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.476637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.476839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.476991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.477017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.477181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.477359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.477425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 
00:20:41.206 [2024-04-19 03:34:18.477602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.477776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.477803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.477982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.478181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.478207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.478337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.478500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.478527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.478697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.478866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.478892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.479023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.479237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.479267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.479452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.479628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.479657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.479858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.480034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.480062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 
00:20:41.206 [2024-04-19 03:34:18.480237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.480413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.480443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.480628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.480802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.480831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.480968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.481162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.481191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.481356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.481539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.481569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.481752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.481904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.481947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.482122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.482295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.482323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.482504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.482658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.482685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 
00:20:41.206 [2024-04-19 03:34:18.482848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.483013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.483055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.483227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.483364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.483401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.483568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.483759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.483806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.483992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.484156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.484185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.484328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.484526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.484553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.484732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.484867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.484896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.206 qpair failed and we were unable to recover it. 00:20:41.206 [2024-04-19 03:34:18.485083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.206 [2024-04-19 03:34:18.485250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.485293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 
00:20:41.207 [2024-04-19 03:34:18.485472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.485617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.485646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.485792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.485938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.485967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.486171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.486334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.486360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.486517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.486642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.486668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.486806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.486967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.486994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.487169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.487373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.487405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.487527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.487664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.487690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 
00:20:41.207 [2024-04-19 03:34:18.487869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.488070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.488102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.488283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.488446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.488473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.488611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.488755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.488783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.488970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.489120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.489146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.489289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.489443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.489470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.489610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.489770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.489799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.489966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.490097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.490126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 
00:20:41.207 [2024-04-19 03:34:18.490310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.490485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.490515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.490701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.490868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.490917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.491092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.491231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.491262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.491407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.491553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.491578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.491743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.491919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.491948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.492101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.492261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.492287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.207 qpair failed and we were unable to recover it. 00:20:41.207 [2024-04-19 03:34:18.492425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.207 [2024-04-19 03:34:18.492575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.492601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 
00:20:41.208 [2024-04-19 03:34:18.492756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.492927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.492955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.493100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.493276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.493302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.493429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.493591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.493633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.493834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.493996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.494044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.494223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.494398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.494437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.494621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.494858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.494921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.495097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.495269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.495297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 
00:20:41.208 [2024-04-19 03:34:18.495479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.495665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.495690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.495827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.495982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.496009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.496188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.496372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.496405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.496584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.496733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.496761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.496938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.497097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.497125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.497285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.497442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.497470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.497628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.497807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.497836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 
00:20:41.208 [2024-04-19 03:34:18.498014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.498174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.498200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.498357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.498543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.498570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.498748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.498948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.498977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.499119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.499276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.499301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.499495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.499647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.499673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.499864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.500041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.500073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.500226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.500406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.500432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 
00:20:41.208 [2024-04-19 03:34:18.500594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.500721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.500746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.500897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.501078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.208 [2024-04-19 03:34:18.501103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.208 qpair failed and we were unable to recover it. 00:20:41.208 [2024-04-19 03:34:18.501257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.501414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.501440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.501569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.501726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.501751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.501899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.502028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.502055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.502240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.502363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.502394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.502549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.502746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.502771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 
00:20:41.209 [2024-04-19 03:34:18.502898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.503056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.503082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.503261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.503428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.503453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.503611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.503769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.503794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.503925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.504102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.504127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.504290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.504450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.504475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.504645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.504811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.504836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.504989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.505162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.505189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 
00:20:41.209 [2024-04-19 03:34:18.505477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.505639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.505679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.505878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.506072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.506118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.506291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.506474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.506503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.506685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.506822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.506849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.506997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.507165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.507193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.507331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.507514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.507539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.507673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.507854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.507894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 
00:20:41.209 [2024-04-19 03:34:18.508098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.508269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.508296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.508466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.508603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.508627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.508837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.509019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.509047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.509242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.509468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.509494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.509716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.509910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.509955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.209 qpair failed and we were unable to recover it. 00:20:41.209 [2024-04-19 03:34:18.510173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.209 [2024-04-19 03:34:18.510371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.510410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.510620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.510772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.510799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 
00:20:41.210 [2024-04-19 03:34:18.510994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.511184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.511228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.511425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.511583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.511607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.511764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.511959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.511986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.512133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.512296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.512323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.512514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.512690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.512718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.512917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.513082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.513125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.513263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.513452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.513478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 
00:20:41.210 [2024-04-19 03:34:18.513606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.513767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.513808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.514018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.514175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.514214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.514402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.514598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.514623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.514765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.514919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.514958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.515109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.515273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.515300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.515520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.515687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.515711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.515890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.516071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.516115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 
00:20:41.210 [2024-04-19 03:34:18.516306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.516493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.516518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.516648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.516828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.516869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.517063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.517222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.517261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.517448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.517579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.517604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.517755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.517939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.517966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.518176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.518371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.518407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.518601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.518761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.518788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 
00:20:41.210 [2024-04-19 03:34:18.518958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.519142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.519174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.519395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.519554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.519579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.519764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.519964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.519992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.210 [2024-04-19 03:34:18.520187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.520387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.210 [2024-04-19 03:34:18.520415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.210 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.520601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.520729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.520769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.520941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.521111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.521138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.521343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.521505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.521531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 
00:20:41.211 [2024-04-19 03:34:18.521725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.521859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.521883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.522020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.522206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.522234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.522389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.522570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.522598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.522767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.522966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.522993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.523165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.523335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.523362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.523521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.523719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.523746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.523925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.524082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.524107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 
00:20:41.211 [2024-04-19 03:34:18.524313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.524525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.524554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.524732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.524928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.524973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.525176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.525331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.525371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.525542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.525706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.525733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.525863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.526084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.526117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.526285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.526443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.526469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.526612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.526762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.526790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 
00:20:41.211 [2024-04-19 03:34:18.526938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.527140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.527167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.527337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.527548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.527577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.527746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.527919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.527944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.528106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.528235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.528259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.528417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.528598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.528626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.528796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.528960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.528988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.211 [2024-04-19 03:34:18.529160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.529320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.529347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 
00:20:41.211 [2024-04-19 03:34:18.529569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.529698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.211 [2024-04-19 03:34:18.529727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.211 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.529872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.530051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.530093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.530264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.530463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.530492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.530655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.530837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.530861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.531032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.531237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.531264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.531454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.531583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.531609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.531762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.531931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.531958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 
00:20:41.212 [2024-04-19 03:34:18.532102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.532310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.532335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.532471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.532604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.532629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.532759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.532911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.532952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.533150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.533315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.533342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.533526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.533654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.533679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.533810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.533961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.533985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.534146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.534300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.534324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 
00:20:41.212 [2024-04-19 03:34:18.534529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.534688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.534713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.534876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.535075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.535107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.535327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.535508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.535533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.212 qpair failed and we were unable to recover it. 00:20:41.212 [2024-04-19 03:34:18.535690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.535817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.212 [2024-04-19 03:34:18.535842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.535986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.536137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.536179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.536377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.536578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.536603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.536785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.536971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.537017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 
00:20:41.213 [2024-04-19 03:34:18.537196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.537365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.537412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.537589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.537738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.537765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.537948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.538099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.538123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.538275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.538442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.538485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.538679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.538867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.538912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.539085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.539250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.539277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.539471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.539656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.539683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 
00:20:41.213 [2024-04-19 03:34:18.539855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.540031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.540059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.540227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.540403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.540436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.540628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.540782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.540823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.540973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.541143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.541170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.541368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.541555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.541582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.541761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.541895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.541919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.542052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.542180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.542204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 
00:20:41.213 [2024-04-19 03:34:18.542327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.542459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.542486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.542669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.542812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.542839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.543040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.543169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.543194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.543350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.543527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.543556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.543730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.543923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.543951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.544147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.544342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.544370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 00:20:41.213 [2024-04-19 03:34:18.544601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.544799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.544842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.213 qpair failed and we were unable to recover it. 
00:20:41.213 [2024-04-19 03:34:18.545043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.213 [2024-04-19 03:34:18.545212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.545243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 00:20:41.214 [2024-04-19 03:34:18.545435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.545576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.545603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 00:20:41.214 [2024-04-19 03:34:18.545772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.545912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.545939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 00:20:41.214 [2024-04-19 03:34:18.546141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.546342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.546369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 00:20:41.214 [2024-04-19 03:34:18.546524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.546696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.546723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 00:20:41.214 [2024-04-19 03:34:18.546929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.547086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.547110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 00:20:41.214 [2024-04-19 03:34:18.547294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.547446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.547474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 
00:20:41.214 [2024-04-19 03:34:18.547678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.547895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.547938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 00:20:41.214 [2024-04-19 03:34:18.548135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.548294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.548319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 00:20:41.214 [2024-04-19 03:34:18.548470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.548648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.548680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 00:20:41.214 [2024-04-19 03:34:18.548881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.549007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.549032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 00:20:41.214 [2024-04-19 03:34:18.549179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.549377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.549414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 00:20:41.214 [2024-04-19 03:34:18.549618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.549745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.549770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 00:20:41.214 [2024-04-19 03:34:18.549901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.550050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.214 [2024-04-19 03:34:18.550075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.214 qpair failed and we were unable to recover it. 
00:20:41.220 [2024-04-19 03:34:18.597482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.597606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.597631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.597791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.597977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.598002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.598160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.598308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.598332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.598513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.598668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.598693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.598849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.599033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.599057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.599217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.599377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.599408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.599539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.599667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.599691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 
00:20:41.220 [2024-04-19 03:34:18.599875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.600032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.600057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.600211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.600396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.600421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.600600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.600753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.600778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.600934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.601059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.601083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.601238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.601407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.601433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.601591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.601748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.601774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.601935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.602086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.602110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 
00:20:41.220 [2024-04-19 03:34:18.602267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.602448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.602474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.602636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.602818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.602843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.602975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.603119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.603144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.603326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.603508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.603534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.603690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.603846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.603877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.604038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.604187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.604211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.220 qpair failed and we were unable to recover it. 00:20:41.220 [2024-04-19 03:34:18.604396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.604521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.220 [2024-04-19 03:34:18.604546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 
00:20:41.221 [2024-04-19 03:34:18.604677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.604829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.604853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.605005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.605159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.605183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.605366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.605547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.605573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.605759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.605909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.605934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.606094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.606248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.606272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.606400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.606532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.606556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.606710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.606834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.606859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 
00:20:41.221 [2024-04-19 03:34:18.606990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.607174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.607198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.607364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.607529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.607555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.607711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.607862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.607886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.608042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.608204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.608229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.608429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.608585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.608610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.608768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.608922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.608946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.609133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.609262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.609286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 
00:20:41.221 [2024-04-19 03:34:18.609468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.609598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.609623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.609780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.609902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.609927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.610088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.610242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.610267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.610422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.610577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.610602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.610764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.610914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.610939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.611089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.611249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.611273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.611431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.611589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.611613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 
00:20:41.221 [2024-04-19 03:34:18.611768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.611920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.611944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.612127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.612308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.612332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.612460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.612622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.612647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.612807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.612988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.613013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.613168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.613330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.613354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.613526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.613659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.613684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.221 qpair failed and we were unable to recover it. 00:20:41.221 [2024-04-19 03:34:18.613812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.221 [2024-04-19 03:34:18.613972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.613997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 
00:20:41.222 [2024-04-19 03:34:18.614158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.614315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.614339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.614514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.614675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.614699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.614830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.614980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.615004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.615192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.615316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.615340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.615527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.615676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.615701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.615834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.615993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.616017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.616145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.616271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.616295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 
00:20:41.222 [2024-04-19 03:34:18.616430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.616586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.616611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.616743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.616928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.616952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.617113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.617237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.617261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.617424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.617559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.617584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.617742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.617898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.617923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.618071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.618228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.618253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.618410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.618531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.618555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 
00:20:41.222 [2024-04-19 03:34:18.618739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.618889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.618913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.619069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.619252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.619277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.619414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.619604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.619628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.619778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.619923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.619948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.620101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.620279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.620303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.620463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.620596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.620621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.620756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.620937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.620966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 
00:20:41.222 [2024-04-19 03:34:18.621157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.621311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.222 [2024-04-19 03:34:18.621335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.222 qpair failed and we were unable to recover it. 00:20:41.222 [2024-04-19 03:34:18.621471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.621626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.621651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.621783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.621942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.621966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.622134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.622316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.622340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.622481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.622607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.622632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.622812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.622958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.622983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.623141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.623321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.623345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 
00:20:41.223 [2024-04-19 03:34:18.623511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.623661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.623685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.623875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.624032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.624056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.624210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.624370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.624404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.624544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.624729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.624753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.624881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.625034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.625058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.625209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.625359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.625404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.625561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.625719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.625743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 
00:20:41.223 [2024-04-19 03:34:18.625896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.626056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.626080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.626206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.626368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.626400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.626529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.626685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.626712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.626868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.627047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.627071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.627229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.627407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.627432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.627565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.627724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.627749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.627934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.628088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.628113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 
00:20:41.223 [2024-04-19 03:34:18.628273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.628441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.628466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.628649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.628802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.628826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.628987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.629169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.629193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.629370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.629547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.629572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.223 qpair failed and we were unable to recover it. 00:20:41.223 [2024-04-19 03:34:18.629704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.223 [2024-04-19 03:34:18.629857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.629882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.630063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.630193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.630218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.630355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.630518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.630544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 
00:20:41.224 [2024-04-19 03:34:18.630725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.630884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.630909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.631062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.631191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.631216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.631378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.631565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.631589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.631744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.631897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.631921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.632077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.632235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.632260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.632436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.632618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.632643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.632802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.632934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.632959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 
00:20:41.224 [2024-04-19 03:34:18.633113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.633239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.633263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.633392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.633549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.633574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.633727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.633886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.633911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.634070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.634228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.634252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.634401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.634560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.634585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.634745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.634903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.634928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 00:20:41.224 [2024-04-19 03:34:18.635084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.635239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.224 [2024-04-19 03:34:18.635264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.224 qpair failed and we were unable to recover it. 
00:20:41.230 [2024-04-19 03:34:18.682332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.682486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.682511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.682652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.682815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.682840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.682996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.683121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.683145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.683294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.683429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.683455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.683614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.683753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.683778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.683959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.684142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.684166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.684321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.684455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.684480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 
00:20:41.230 [2024-04-19 03:34:18.684615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.684791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.684815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.684970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.685130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.685154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.685281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.685438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.685463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.685590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.685728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.685754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.685933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.686061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.686086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.686238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.686411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.686440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.686607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.686771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.686796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 
00:20:41.230 [2024-04-19 03:34:18.686955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.687116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.687140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.687277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.687455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.687480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.687649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.687778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.687802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.687969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.688129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.688153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.688289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.688455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.688481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.688617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.688778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.688802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 00:20:41.230 [2024-04-19 03:34:18.688932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.689103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.230 [2024-04-19 03:34:18.689128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.230 qpair failed and we were unable to recover it. 
00:20:41.230 [2024-04-19 03:34:18.689264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.689430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.689456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.689611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.689772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.689797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.689930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.690064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.690088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.690262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.690396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.690421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.690579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.690709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.690733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.690877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.691029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.691053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.691203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.691357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.691388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 
00:20:41.231 [2024-04-19 03:34:18.691521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.691673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.691698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.691821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.691998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.692023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.692174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.692335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.692359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.692504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.692637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.692666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.692796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.692948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.692973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.693107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.693239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.693264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.693401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.693545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.693569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 
00:20:41.231 [2024-04-19 03:34:18.693729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.693857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.693881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.694008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.694161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.694185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.694368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.694513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.694538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.694671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.694805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.694829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.694963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.695121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.695147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.695326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.695508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.695534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.695663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.695821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.695850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 
00:20:41.231 [2024-04-19 03:34:18.696001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.696182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.696206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.696366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.696513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.696538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.696708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.696874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.696898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.697032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.697186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.697211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.697367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.697589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.697615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.231 [2024-04-19 03:34:18.697757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.697913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.231 [2024-04-19 03:34:18.697938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.231 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.698095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.698277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.698301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 
00:20:41.232 [2024-04-19 03:34:18.698465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.698656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.698680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.698843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.699004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.699028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.699158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.699311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.699335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.699501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.699659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.699684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.699840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.699996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.700021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.700174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.700356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.700385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.700545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.700671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.700695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 
00:20:41.232 [2024-04-19 03:34:18.700817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.700970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.700995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.701149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.701280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.701304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.701441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.701585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.701610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.701748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.701901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.701925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.702085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.702213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.702238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.702396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.702547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.702571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.702719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.702872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.702896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 
00:20:41.232 [2024-04-19 03:34:18.703057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.703216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.703240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.703363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.703523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.703548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.703710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.703842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.703866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.704024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.704191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.704215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.704375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.704559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.704583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.704739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.704863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.704887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.705012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.705185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.705210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 
00:20:41.232 [2024-04-19 03:34:18.705365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.705543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.705568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.705702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.705828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.705853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.706010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.706173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.706198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.706354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.706543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.706568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.232 qpair failed and we were unable to recover it. 00:20:41.232 [2024-04-19 03:34:18.706726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.232 [2024-04-19 03:34:18.706858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.706883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.707067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.707193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.707217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.707349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.707493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.707519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 
00:20:41.233 [2024-04-19 03:34:18.707670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.707830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.707855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.708009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.708165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.708189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.708346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.708526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.708551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.708700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.708857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.708881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.709013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.709140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.709164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.709322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.709461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.709487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.709609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.709776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.709800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 
00:20:41.233 [2024-04-19 03:34:18.709931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.710058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.710082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.710212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.710369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.710401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.710542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.710680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.710705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.710861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.711044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.711068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.711221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.711347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.711372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.711550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.711680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.711704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.711834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.711961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.711986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 
00:20:41.233 [2024-04-19 03:34:18.712146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.712300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.712324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.712485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.712638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.712670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.233 qpair failed and we were unable to recover it. 00:20:41.233 [2024-04-19 03:34:18.712830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.712988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.233 [2024-04-19 03:34:18.713013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.713165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.713316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.713341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.713498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.713679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.713704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.713862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.713998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.714022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.714175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.714333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.714358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 
00:20:41.234 [2024-04-19 03:34:18.714502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.714656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.714680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.714831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.714986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.715010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.715142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.715319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.715344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.715518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.715674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.715698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.715857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.716028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.716053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.716210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.716366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.716397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.716552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.716688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.716714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 
00:20:41.234 [2024-04-19 03:34:18.716869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.717026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.717050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.717214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.717396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.717432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.717595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.717723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.717748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.717875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.718001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.718025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.718175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.718300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.718325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.718491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.718651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.718676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.718836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.719012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.719036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 
00:20:41.234 [2024-04-19 03:34:18.719165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.719290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.719315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.719480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.719637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.719661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.719815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.719946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.719970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.720092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.720222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.720246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.720371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.720535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.720559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.720730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.720863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.720887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 00:20:41.234 [2024-04-19 03:34:18.721011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.721162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.721187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.234 qpair failed and we were unable to recover it. 
00:20:41.234 [2024-04-19 03:34:18.721345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.721550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.234 [2024-04-19 03:34:18.721576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.721713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.721902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.721926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.722056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.722209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.722233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.722394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.722546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.722571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.722704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.722890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.722914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.723041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.723193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.723218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.723354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.723494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.723520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 
00:20:41.235 [2024-04-19 03:34:18.723653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.723775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.723799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.723952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.724108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.724132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.724257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.724417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.724442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.724565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.724754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.724778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.724934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.725121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.725145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.725277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.725431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.725456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.725590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.725721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.725745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 
00:20:41.235 [2024-04-19 03:34:18.725894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.726048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.726073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.726199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.726332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.726356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.726528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.726658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.726683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.726837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.726965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.726990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.727145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.727295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.727320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.727447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.727598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.727623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.727750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.727903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.727927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 
00:20:41.235 [2024-04-19 03:34:18.728083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.728234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.728258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.728396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.728580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.728605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.728735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.728866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.728890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.729042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.729175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.729203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.729336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.729483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.729509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.235 [2024-04-19 03:34:18.729690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.729854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.235 [2024-04-19 03:34:18.729878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.235 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.730009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.730165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.730190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 
00:20:41.236 [2024-04-19 03:34:18.730339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.730469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.730494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.730650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.730806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.730830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.730987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.731140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.731164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.731348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.731509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.731535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.731683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.731837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.731861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.731986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.732112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.732136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.732295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.732435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.732461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 
00:20:41.236 [2024-04-19 03:34:18.732608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.732779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.732804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.732966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.733098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.733123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.733280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.733463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.733489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.733677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.733830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.733855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.734007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.734137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.734162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.734343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.734504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.734529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.734687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.734837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.734861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 
00:20:41.236 [2024-04-19 03:34:18.735009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.735165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.735190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.735389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.735562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.735587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.735742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.735902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.735927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.736066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.736228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.736252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.736393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.736531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.736555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.736682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.736835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.736859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.737038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.737171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.737195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 
00:20:41.236 [2024-04-19 03:34:18.737346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.737524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.737550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.737687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.737848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.737872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.738002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.738161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.738186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.738314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.738473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.738499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.738633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.738792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.236 [2024-04-19 03:34:18.738817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.236 qpair failed and we were unable to recover it. 00:20:41.236 [2024-04-19 03:34:18.738948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.739117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.739141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.237 qpair failed and we were unable to recover it. 00:20:41.237 [2024-04-19 03:34:18.739293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.739431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.739456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.237 qpair failed and we were unable to recover it. 
00:20:41.237 [2024-04-19 03:34:18.739584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.739709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.739734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.237 qpair failed and we were unable to recover it. 00:20:41.237 [2024-04-19 03:34:18.739896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.740025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.740049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.237 qpair failed and we were unable to recover it. 00:20:41.237 [2024-04-19 03:34:18.740212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.740366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.740398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.237 qpair failed and we were unable to recover it. 00:20:41.237 [2024-04-19 03:34:18.740584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.740758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.740783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.237 qpair failed and we were unable to recover it. 00:20:41.237 [2024-04-19 03:34:18.740910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.741060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.741085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.237 qpair failed and we were unable to recover it. 00:20:41.237 [2024-04-19 03:34:18.741218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.741354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.741378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.237 qpair failed and we were unable to recover it. 00:20:41.237 [2024-04-19 03:34:18.741515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.741651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.741676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.237 qpair failed and we were unable to recover it. 
00:20:41.237 [2024-04-19 03:34:18.741831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.741961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.741986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.237 qpair failed and we were unable to recover it. 00:20:41.237 [2024-04-19 03:34:18.742116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.742271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.742295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.237 qpair failed and we were unable to recover it. 00:20:41.237 [2024-04-19 03:34:18.742450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.742588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.742613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.237 qpair failed and we were unable to recover it. 00:20:41.237 [2024-04-19 03:34:18.742770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.742922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.237 [2024-04-19 03:34:18.742947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.743073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.743206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.743231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.743365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.743501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.743527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.743696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.743827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.743851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 
00:20:41.513 [2024-04-19 03:34:18.744005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.744129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.744153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.744289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.744415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.744450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.744584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.744717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.744742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.744921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.745072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.745096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.745231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.745422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.745450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.745609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.745762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.745793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.745935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.746092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.746116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 
00:20:41.513 [2024-04-19 03:34:18.746241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.746371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.746406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.746541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.746701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.746726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.746863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.746992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.747016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.747147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.747299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.747323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.747447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.747572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.747597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.747775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.747904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.747930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.513 [2024-04-19 03:34:18.748089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.748241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.748265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 
00:20:41.513 [2024-04-19 03:34:18.748396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.748580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.513 [2024-04-19 03:34:18.748605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.513 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.748734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.748859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.748883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.749024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.749152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.749176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.749312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.749466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.749492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.749628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.749782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.749806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.749933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.750060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.750084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.750264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.750394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.750419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 
00:20:41.514 [2024-04-19 03:34:18.750550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.750735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.750760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.750943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.751074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.751099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.751262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.751394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.751427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.751623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.751792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.751816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.751950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.752100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.752125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.752281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.752415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.752462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.752626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.752756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.752780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 
00:20:41.514 [2024-04-19 03:34:18.752913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.753041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.753066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.753195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.753354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.753378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.753564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.753694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.753718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.753838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.753987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.754012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.754151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.754289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.754314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.754470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.754607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.754631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.754761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.754920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.754944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 
00:20:41.514 [2024-04-19 03:34:18.755066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.755193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.755217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.755347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.755508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.755533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.755665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.755813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.755837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.755975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.756109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.756136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.756272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.756432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.756457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.756609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.756773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.756797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.756952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.757113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.757139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 
00:20:41.514 [2024-04-19 03:34:18.757264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.757390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.757416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.514 qpair failed and we were unable to recover it. 00:20:41.514 [2024-04-19 03:34:18.757603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.757723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.514 [2024-04-19 03:34:18.757748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.515 qpair failed and we were unable to recover it. 00:20:41.515 [2024-04-19 03:34:18.757903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.758056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.758080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.515 qpair failed and we were unable to recover it. 00:20:41.515 [2024-04-19 03:34:18.758235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.758368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.758400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.515 qpair failed and we were unable to recover it. 00:20:41.515 [2024-04-19 03:34:18.758549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.758687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.758711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.515 qpair failed and we were unable to recover it. 00:20:41.515 [2024-04-19 03:34:18.758846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.758998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.759022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.515 qpair failed and we were unable to recover it. 00:20:41.515 [2024-04-19 03:34:18.759160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.759288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.759312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.515 qpair failed and we were unable to recover it. 
00:20:41.515 [2024-04-19 03:34:18.759440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.759576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.759602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.515 qpair failed and we were unable to recover it. 00:20:41.515 [2024-04-19 03:34:18.759759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.759944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.759969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.515 qpair failed and we were unable to recover it. 00:20:41.515 [2024-04-19 03:34:18.760123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.760252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.760276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.515 qpair failed and we were unable to recover it. 00:20:41.515 [2024-04-19 03:34:18.760455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.760595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.760621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.515 qpair failed and we were unable to recover it. 00:20:41.515 [2024-04-19 03:34:18.760778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.760965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.760990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.515 qpair failed and we were unable to recover it. 00:20:41.515 [2024-04-19 03:34:18.761155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.761307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.761332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.515 qpair failed and we were unable to recover it. 00:20:41.515 [2024-04-19 03:34:18.761466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.761624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.515 [2024-04-19 03:34:18.761649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.515 qpair failed and we were unable to recover it. 
00:20:41.515 [2024-04-19 03:34:18.761817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.761982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.762012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.762198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.762353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.762378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.762554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.762711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.762736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.762867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.763027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.763052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.763202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.763329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.763353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.763491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.763646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.763670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.763804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.763946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.763971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.764131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.764320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.764344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.764494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.764620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.764644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.764774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.764904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.764929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.765060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.765212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.765240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.765410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.765570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.765595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.765728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.765883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.765908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.766088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.766212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.766236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.766395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.766551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.766576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.766732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.766892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.766917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.515 qpair failed and we were unable to recover it.
00:20:41.515 [2024-04-19 03:34:18.767095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.767245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.515 [2024-04-19 03:34:18.767269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.767392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.767547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.767572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.767737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.767897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.767921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.768086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.768214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.768238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.768394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.768551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.768575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.768711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.768844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.768868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.768997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.769131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.769155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.769309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.769462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.769488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.769642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.769795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.769819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.769959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.770139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.770164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.770304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.770457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.770482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.770662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.770837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.770861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.770994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.771187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.771212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.771369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.771535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.771560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.771688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.771846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.771870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.772031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.772186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.772211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.772343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.772513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.772539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.772704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.772851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.772875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.772998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.773152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.773176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.773308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.773445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.773471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.773603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.773765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.773790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.773908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.774064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.774088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.774225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.774353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.774377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.774545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.774666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.774691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.774845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.775009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.775034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.775164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.775323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.775348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.775481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.775631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.775655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.775786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.775943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.775967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.776095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.776242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.776267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.516 qpair failed and we were unable to recover it.
00:20:41.516 [2024-04-19 03:34:18.776423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.776577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.516 [2024-04-19 03:34:18.776602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.776730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.776885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.776909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.777035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.777195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.777220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.777373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.777533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.777558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.777714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.777868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.777893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.778046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.778170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.778194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.778323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.778485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.778511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.778670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.778798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.778823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.778977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.779136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.779160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.779318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.779449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.779475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.779636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.779793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.779817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.779944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.780101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.780126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.780256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.780392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.780417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.780578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.780702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.780727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.780882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.781055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.781080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.781241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.781400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.781426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.781583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.781741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.781770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.781929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.782060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.782085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.782239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.782401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.782427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.782586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.782775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.782800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.782960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.783125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.783150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.783309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.783462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.783488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.783659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.783815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.783840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.783999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.784176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.784200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.517 qpair failed and we were unable to recover it.
00:20:41.517 [2024-04-19 03:34:18.784346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.517 [2024-04-19 03:34:18.784479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.784505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.784633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.784788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.784812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.784967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.785138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.785163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.785349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.785487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.785513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.785670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.785843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.785868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.785998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.786180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.786204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.786361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.786498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.786523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.786654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.786808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.786832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.786988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.787168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.787193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.787350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.787487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.787512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.787646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.787773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.787798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.787970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.788157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.788181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.788341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.788478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.788503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.788637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.788786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.788810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.788939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.789069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.789093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.789258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.789415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.789441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.789598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.789728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.789753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.789898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.790053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.790077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.790215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.790342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.790368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.790561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.790717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.790742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.790901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.791054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.791079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.791262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.791421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.791447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.791579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.791745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.791770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.791904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.792076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.792101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.792261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.792416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.792441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.792599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.792737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.792761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.792913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.793069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.793093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.793245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.793395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.793421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.793582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.793738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.518 [2024-04-19 03:34:18.793763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.518 qpair failed and we were unable to recover it.
00:20:41.518 [2024-04-19 03:34:18.793883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.794057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.794081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.794235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.794389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.794414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.794558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.794692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.794717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.794870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.794998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.795022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.795206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.795339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.795364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.795553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.795679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.795703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.795887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.796058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.796083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.796207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.796360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.796391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.796527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.796705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.796729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.796911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.797059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.797083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.797237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.797411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.797437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.797590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.797723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.797747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.797908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.798065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.798089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.798272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.798398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.798423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.798563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.798691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.798720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.798875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.799024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.799049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.799229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.799389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.799414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.799573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.799765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.799790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.799938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.800101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.800125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.800245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.800376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.800409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.800589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.800768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.800793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.800957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.801112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.801136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.801281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.801441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.801466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.801625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.801783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.801807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.801963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.802122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.802147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.802279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.802414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.802440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.802596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.802748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.802773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.802925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.803051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.519 [2024-04-19 03:34:18.803077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.519 qpair failed and we were unable to recover it.
00:20:41.519 [2024-04-19 03:34:18.803232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.519 [2024-04-19 03:34:18.803364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.519 [2024-04-19 03:34:18.803394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.519 qpair failed and we were unable to recover it. 00:20:41.519 [2024-04-19 03:34:18.803576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.519 [2024-04-19 03:34:18.803710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.803735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.803919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.804095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.804120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.804279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.804456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.804482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.804635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.804787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.804814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.804974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.805143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.805167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.805327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.805451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.805477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 
00:20:41.520 [2024-04-19 03:34:18.805637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.805797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.805821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.805982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.806132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.806156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.806283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.806414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.806439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.806589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.806739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.806764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.806935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.807089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.807114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.807269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.807430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.807455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.807611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.807771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.807797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 
00:20:41.520 [2024-04-19 03:34:18.807934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.808098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.808123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.808318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.808443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.808469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.808615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.808775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.808799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.808983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.809135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.809160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.809320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.809448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.809474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.809632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.809760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.809785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.809956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.810138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.810162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 
00:20:41.520 [2024-04-19 03:34:18.810321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.810452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.810477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.810636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.810816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.810840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.810969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.811128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.811152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.811307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.811446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.811471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.811655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.811836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.811861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.812017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.812252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.812277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.812407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.812544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.812569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 
00:20:41.520 [2024-04-19 03:34:18.812805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.812928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.812952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.813114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.813248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.813272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.520 qpair failed and we were unable to recover it. 00:20:41.520 [2024-04-19 03:34:18.813451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.520 [2024-04-19 03:34:18.813573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.813597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.813729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.813885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.813909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.814041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.814195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.814220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.814345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.814518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.814543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.814676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.814827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.814852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 
00:20:41.521 [2024-04-19 03:34:18.815006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.815239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.815264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.815501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.815638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.815664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.815824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.815974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.816003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.816161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.816322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.816347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.816536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.816697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.816721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.816872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.817030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.817054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.817239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.817410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.817436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 
00:20:41.521 [2024-04-19 03:34:18.817673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.817832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.817857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.818016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.818177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.818201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.818358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.818524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.818549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.818731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.818884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.818909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.819067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.819220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.819245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.819402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.819554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.819583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.819739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.819923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.819947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 
00:20:41.521 [2024-04-19 03:34:18.820105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.820259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.820283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.820436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.820593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.820619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.820782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.820914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.820938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.821095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.821245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.821269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.821427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.821585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.821610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.821735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.821916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.821940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.822100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.822257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.822281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 
00:20:41.521 [2024-04-19 03:34:18.822415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.822535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.822560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.822725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.822905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.822930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.823172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.823328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.823353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.521 qpair failed and we were unable to recover it. 00:20:41.521 [2024-04-19 03:34:18.823490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.521 [2024-04-19 03:34:18.823649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.823674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.823833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.823990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.824015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.824147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.824304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.824328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.824511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.824644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.824669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 
00:20:41.522 [2024-04-19 03:34:18.824832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.824989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.825013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.825165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.825298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.825322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.825490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.825615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.825640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.825796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.826031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.826056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.826183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.826335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.826359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.826517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.826669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.826693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.826875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.827026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.827050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 
00:20:41.522 [2024-04-19 03:34:18.827182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.827336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.827360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.827525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.827649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.827674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.827823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.827955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.827980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.828140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.828324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.828349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.828512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.828671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.828695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.828851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.829029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.829054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.829191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.829374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.829406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 
00:20:41.522 [2024-04-19 03:34:18.829569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.829748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.829772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.829929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.830087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.830112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.830295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.830418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.830444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.830598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.830749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.830774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.830902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.831050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.831074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.831253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.831431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.831456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 00:20:41.522 [2024-04-19 03:34:18.831633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.831809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.522 [2024-04-19 03:34:18.831833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.522 qpair failed and we were unable to recover it. 
00:20:41.522 [2024-04-19 03:34:18.831962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.832114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.832138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.832290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.832448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.832473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.832634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.832796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.832820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.832965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.833098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.833122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.833302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.833461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.833487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.833669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.833849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.833873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.834031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.834214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.834239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 
00:20:41.523 [2024-04-19 03:34:18.834423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.834556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.834580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.834738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.834917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.834941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.835098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.835281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.835306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.835461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.835601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.835626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.835785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.835914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.835939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.836094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.836275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.836300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.836460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.836612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.836636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 
00:20:41.523 [2024-04-19 03:34:18.836828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.836964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.836993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.837123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.837280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.837304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.837464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.837649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.837673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.837833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.837963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.837987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.838167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.838324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.838349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.838543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.838699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.838724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.838852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.839003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.839028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 
00:20:41.523 [2024-04-19 03:34:18.839212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.839368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.839400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.839561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.839718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.839742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.839897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.840051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.840075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.840229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.840393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.840418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.840575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.840756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.840780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.840938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.841092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.841117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 00:20:41.523 [2024-04-19 03:34:18.841268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.841449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.841475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it. 
00:20:41.523 [2024-04-19 03:34:18.841610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.841786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.523 [2024-04-19 03:34:18.841810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.523 qpair failed and we were unable to recover it.
[... the same sequence repeats continuously from 03:34:18.841610 through 03:34:18.894215 (console time 00:20:41.523 to 00:20:41.530): two posix_sock_create connect() failures with errno = 111 (ECONNREFUSED), then an nvme_tcp_qpair_connect_sock connection error for tqpair=0x1f33f30 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:20:41.530 [2024-04-19 03:34:18.894343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.894504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.894530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.894653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.894832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.894856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.894977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.895106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.895130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.895289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.895454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.895479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.895609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.895789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.895813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.895974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.896130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.896154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.896319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.896488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.896513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 
00:20:41.530 [2024-04-19 03:34:18.896699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.896852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.896877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.897009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.897168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.897194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.897352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.897517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.897543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.897690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.897840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.897865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.897988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.898139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.898164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.898325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.898485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.898511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.898635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.898789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.898814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 
00:20:41.530 [2024-04-19 03:34:18.898965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.899097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.899124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.899304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.899457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.899482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.899606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.899768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.899792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.899927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.900107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.900132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.900292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.900475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.900500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.530 qpair failed and we were unable to recover it. 00:20:41.530 [2024-04-19 03:34:18.900658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.530 [2024-04-19 03:34:18.900839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.900864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.901059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.901214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.901238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 
00:20:41.531 [2024-04-19 03:34:18.901402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.901565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.901590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.901756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.901921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.901945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.902104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.902230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.902254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.902386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.902518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.902544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.902676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.902835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.902860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.903017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.903147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.903172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.903325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.903506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.903531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 
00:20:41.531 [2024-04-19 03:34:18.903688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.903814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.903839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.903997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.904150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.904174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.904309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.904446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.904472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.904630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.904766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.904791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.904974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.905123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.905147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.905280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.905447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.905472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.905604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.905763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.905788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 
00:20:41.531 [2024-04-19 03:34:18.905943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.906122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.906146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.906299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.906437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.906467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.906607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.906736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.906760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.906924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.907078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.907103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.907261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.907442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.907468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.907628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.907751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.907776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.907960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.908078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.908103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 
00:20:41.531 [2024-04-19 03:34:18.908264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.908422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.908447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.908631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.908789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.908814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.908942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.909099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.909123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.909306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.909465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.909490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.909622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.909777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.909808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.909971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.910156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.910180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.531 qpair failed and we were unable to recover it. 00:20:41.531 [2024-04-19 03:34:18.910305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.910464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.531 [2024-04-19 03:34:18.910490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 
00:20:41.532 [2024-04-19 03:34:18.910621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.910802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.910827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.911008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.911139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.911163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.911321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.911488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.911513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.911671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.911831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.911856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.912010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.912187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.912212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.912377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.912521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.912546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.912672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.912808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.912833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 
00:20:41.532 [2024-04-19 03:34:18.912988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.913141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.913166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.913338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.913483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.913509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.913647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.913801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.913825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.913991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.914147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.914172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.914324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.914484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.914510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.914638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.914761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.914785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.914920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.915046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.915070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 
00:20:41.532 [2024-04-19 03:34:18.915221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.915374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.915408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.915567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.915759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.915784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.915920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.916043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.916067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.916218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.916377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.916410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.916549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.916703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.916728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.916889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.917025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.917049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.917185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.917333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.917358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 
00:20:41.532 [2024-04-19 03:34:18.917504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.917636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.917660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.917796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.917920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.917944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.918078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.918234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.918259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.532 qpair failed and we were unable to recover it. 00:20:41.532 [2024-04-19 03:34:18.918449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.918598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.532 [2024-04-19 03:34:18.918623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.918767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.918897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.918921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.919070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.919202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.919227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.919390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.919529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.919554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 
00:20:41.533 [2024-04-19 03:34:18.919595] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f41860 (9): Bad file descriptor 00:20:41.533 [2024-04-19 03:34:18.919830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.919981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.920009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.920171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.920307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.920333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.920478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.920611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.920638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.920769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.920927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.920953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.921085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.921243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.921270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.921457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.921593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.921619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.921777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.921917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.921943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 
00:20:41.533 [2024-04-19 03:34:18.922106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.922295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.922320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.922481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.922614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.922649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.922806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.922965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.922990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.923151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.923332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.923358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.923507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.923669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.923695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.923841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.924008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.924033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.924167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.924300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.924325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 
00:20:41.533 [2024-04-19 03:34:18.924478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.924654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.924679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.924813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.924999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.925024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.925159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.925324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.925349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.925485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.925617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.925643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.925781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.925946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.925971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.926104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.926269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.926294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 00:20:41.533 [2024-04-19 03:34:18.926442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.926577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.533 [2024-04-19 03:34:18.926604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.533 qpair failed and we were unable to recover it. 
00:20:41.533 [2024-04-19 03:34:18.926746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.533 [2024-04-19 03:34:18.926879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.533 [2024-04-19 03:34:18.926903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420
00:20:41.533 qpair failed and we were unable to recover it.
00:20:41.533 [... the same four-line error sequence repeated for 152 further connection attempts, 03:34:18.927039 through 03:34:18.976613, all for tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420, errno = 111 ...]
00:20:41.539 [2024-04-19 03:34:18.976743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.539 [2024-04-19 03:34:18.976904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.539 [2024-04-19 03:34:18.976931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420
00:20:41.539 qpair failed and we were unable to recover it.
00:20:41.539 [2024-04-19 03:34:18.977065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.977193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.977219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.977372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.977502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.977529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.977684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.977844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.977871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.978021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.978179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.978206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.978364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.978526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.978554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.978714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.978876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.978903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.979064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.979225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.979252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 
00:20:41.539 [2024-04-19 03:34:18.979406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.979539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.979566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.979729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.979918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.979944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.980079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.980259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.980286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.980407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.980538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.980566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.980699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.980853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.980879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.981013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.981144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.981169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.981298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.981434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.981462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 
00:20:41.539 [2024-04-19 03:34:18.981623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.981812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.981839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.982023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.982178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.982204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.982345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.982535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.982562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.982723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.982861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.982888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.983029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.983229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.983255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.539 [2024-04-19 03:34:18.983392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.983521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.539 [2024-04-19 03:34:18.983547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.539 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.983672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.983800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.983826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 
00:20:41.540 [2024-04-19 03:34:18.983976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.984109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.984137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.984305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.984438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.984467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.984620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.984751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.984778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.984907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.985062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.985088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.985256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.985396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.985423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.985560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.985687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.985713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.985844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.986011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.986038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 
00:20:41.540 [2024-04-19 03:34:18.986192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.986323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.986351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.986511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.986673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.986701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.986826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.987008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.987035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.987195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.987386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.987413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.987539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.987694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.987720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.987883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.988067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.988093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.988224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.988376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.988409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 
00:20:41.540 [2024-04-19 03:34:18.988595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.988759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.988787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04a8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.988990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.989144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.989175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.989340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.989523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.989551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.989716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.989900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.989927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.990086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.990242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.990269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.990461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.990602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.990629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.990772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.990906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.990933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 
00:20:41.540 [2024-04-19 03:34:18.991097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.991267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.991294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.991470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.991632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.991659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.991831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.992013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.992039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.992206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.992366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.992399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.992593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.992769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.992798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.992939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.993114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.993141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 00:20:41.540 [2024-04-19 03:34:18.993300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.993469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.540 [2024-04-19 03:34:18.993498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.540 qpair failed and we were unable to recover it. 
00:20:41.541 [2024-04-19 03:34:18.993634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.993795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.993821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.993978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.994136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.994163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.994344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.994515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.994543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.994681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.994836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.994865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.995021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.995205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.995233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.995367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.995509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.995536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.995696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.995857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.995883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 
00:20:41.541 [2024-04-19 03:34:18.996048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.996188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.996215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.996406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.996597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.996624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.996780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.996963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.996990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.997173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.997354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.997386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.997546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.997704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.997731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.997900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.998071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.998097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.998253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.998390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.998417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 
00:20:41.541 [2024-04-19 03:34:18.998547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.998685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.998712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.998896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.999054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.999081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.999241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.999404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.999432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.999593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.999724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:18.999752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:18.999910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:19.000069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:19.000096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:19.000264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:19.000400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:19.000429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:19.000563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:19.000722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:19.000750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 
00:20:41.541 [2024-04-19 03:34:19.000912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:19.001076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:19.001104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:19.001248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:19.001407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:19.001435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:19.001563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:19.001719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:19.001746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.541 qpair failed and we were unable to recover it. 00:20:41.541 [2024-04-19 03:34:19.001905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.541 [2024-04-19 03:34:19.002066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.002092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.002256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.002436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.002464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.002653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.002784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.002812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.002970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.003150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.003176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 
00:20:41.542 [2024-04-19 03:34:19.003340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.003500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.003528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.003666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.003819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.003845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.004014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.004175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.004202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.004333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.004494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.004521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.004653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.004777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.004803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.004991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.005177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.005204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.005330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.005496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.005523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 
00:20:41.542 [2024-04-19 03:34:19.005657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.005816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.005843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.005967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.006125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.006151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.006307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.006469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.006500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.006656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.006822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.006848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.006985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.007180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.007206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.007368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.007538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.007565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.007750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.007910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.007937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 
00:20:41.542 [2024-04-19 03:34:19.008122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.008276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.008302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.008459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.008620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.008648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.008837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.008973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.009001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.009156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.009313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.009339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.009479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.009609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.009637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.009818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.010005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.010037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 00:20:41.542 [2024-04-19 03:34:19.010200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.010390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.542 [2024-04-19 03:34:19.010418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420 00:20:41.542 qpair failed and we were unable to recover it. 
00:20:41.542 [2024-04-19 03:34:19.010604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.542 [2024-04-19 03:34:19.010736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.542 [2024-04-19 03:34:19.010763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.542 qpair failed and we were unable to recover it.
00:20:41.542 [2024-04-19 03:34:19.010924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.542 [2024-04-19 03:34:19.011111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.542 [2024-04-19 03:34:19.011138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.542 qpair failed and we were unable to recover it.
00:20:41.542 [2024-04-19 03:34:19.011323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.542 [2024-04-19 03:34:19.011488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.542 [2024-04-19 03:34:19.011516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.542 qpair failed and we were unable to recover it.
00:20:41.542 [2024-04-19 03:34:19.011705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.542 [2024-04-19 03:34:19.011835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.542 [2024-04-19 03:34:19.011862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.542 qpair failed and we were unable to recover it.
00:20:41.542 [2024-04-19 03:34:19.012000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.542 [2024-04-19 03:34:19.012134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.012161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.012298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.012461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.012488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.012646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.012828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.012854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.013012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.013195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.013221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.013379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.013560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.013591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.013778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.013961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.013988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.014114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.014280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.014306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.014443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.014605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.014634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.014794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.014986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.015012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.015173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.015336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.015363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.015531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.015691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.015718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.015858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.016019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.016046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.016207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.016365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.016399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.016589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.016722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.016748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.016906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.017066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.017098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.017286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.017446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.017474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.017621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.017779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.017806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.017992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.018153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.018179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.018336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.018508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.018536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.018692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.018854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.018881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.019036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.019191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.019218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.019377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.019536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.019563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.019716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.019875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.019902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.020082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.020243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.020270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.020416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.020580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.020607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.020770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.020955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.020982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.021169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.021350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.021376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.021548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.021679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.021706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.021891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.022017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.543 [2024-04-19 03:34:19.022043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.543 qpair failed and we were unable to recover it.
00:20:41.543 [2024-04-19 03:34:19.022175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.022333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.022362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.022508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.022669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.022696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.022857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.022994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.023020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.023147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.023302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.023329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.023492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.023650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.023677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.023858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.024017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.024044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.024233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.024373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.024408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.024556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.024748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.024775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.024929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.025083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.025110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.025287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.025421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.025449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.025589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.025743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.025770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.025887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.026044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.026071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.026205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.026391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.026419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.026575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.026738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.026765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.026947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.027073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.027101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.027285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.027441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.027468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.027634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.027789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.027817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.027984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.028140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.028167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.028321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.028489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.028517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.028672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.028825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.028852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.029038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.029197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.029223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.029389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.029547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.029574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.029723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.029910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.029937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.030096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.030220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.030246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.030408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.030540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.030567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.030700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.030887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.030914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.031071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.031234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.031260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.031419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.031578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.031604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.031764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.031925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.544 [2024-04-19 03:34:19.031951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.544 qpair failed and we were unable to recover it.
00:20:41.544 [2024-04-19 03:34:19.032111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.032291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.032317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.032476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.032609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.032637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.032795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.032955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.032981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.033142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.033300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.033327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.033461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.033618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.033645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.033804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.033943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.033979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.034105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.034269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.034295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.034485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.034641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.034669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.034806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.034994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.035022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.035185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.035342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.035368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.035555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.035738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.035764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.036025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.036241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.036267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.036423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.036582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.036609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.036769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.036932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.036959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.037098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.037264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.037290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.037464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.037624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.037651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.037838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.038097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.038123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.038267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.038402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.038429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.038585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.038773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.038800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.038964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.039128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.039165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.039326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.039472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.039500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.039686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.039869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.039896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.040064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.040221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.040247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.040421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.040580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.040607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.040741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.040899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.040927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.041087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.041293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.041321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.041485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.041622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.041649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.041809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.041991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.042017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.545 [2024-04-19 03:34:19.042174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.042331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.545 [2024-04-19 03:34:19.042367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.545 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.042543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.042695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.042722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.042882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.043039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.043065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.043218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.043344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.043372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.043532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.043694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.043721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.043856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.044012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.044040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.044185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.044350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.044377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.044591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.044720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.044748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.044927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.045058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.045084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b0000b90 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.045286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.045470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.045501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.045661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.045823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.045849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.046010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.046173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.046199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.046362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.046534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.046561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.046698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.046867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.046892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.047023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.047182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.047208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.047405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.047544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.047570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.047722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.047908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.047933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.048093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.048273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.048299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.048492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.048649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.048675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.048809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.048997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.049023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.049179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.049361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.049395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.049590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.049781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.049807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.049976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.050135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.050162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.050322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.050464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.050493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.050859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.051032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.051058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.546 qpair failed and we were unable to recover it.
00:20:41.546 [2024-04-19 03:34:19.051214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.546 [2024-04-19 03:34:19.051370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.051402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.547 qpair failed and we were unable to recover it.
00:20:41.547 [2024-04-19 03:34:19.051549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.051720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.051747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.547 qpair failed and we were unable to recover it.
00:20:41.547 [2024-04-19 03:34:19.051887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.052036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.052063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.547 qpair failed and we were unable to recover it.
00:20:41.547 [2024-04-19 03:34:19.052196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.052325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.052353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.547 qpair failed and we were unable to recover it.
00:20:41.547 [2024-04-19 03:34:19.052495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.052622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.052652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.547 qpair failed and we were unable to recover it.
00:20:41.547 [2024-04-19 03:34:19.052798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.052980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.053006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.547 qpair failed and we were unable to recover it.
00:20:41.547 [2024-04-19 03:34:19.053166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.053325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.053352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.547 qpair failed and we were unable to recover it.
00:20:41.547 [2024-04-19 03:34:19.053493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.053624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.053650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.547 qpair failed and we were unable to recover it.
00:20:41.547 [2024-04-19 03:34:19.053842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.053973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.547 [2024-04-19 03:34:19.054011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.547 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.054138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.054298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.054325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.054461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.054625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.054653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.054815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.054957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.054982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.055140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.055282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.055308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.055445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.055620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.055646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.055830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.055984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.056010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.056178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.056310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.056337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.056473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.056592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.056618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.056752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.056879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.056907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.057099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.057253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.057279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.057458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.057583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.057609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.057796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.057954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.057980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.058134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.058298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.058325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.058473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.058634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.058672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.058796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.058951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.058978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.059128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.059279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.059315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.059473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.059629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.059655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.059819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.060006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.060032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.060190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.060347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.060373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.060527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.060711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.060738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.823 qpair failed and we were unable to recover it.
00:20:41.823 [2024-04-19 03:34:19.060923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.823 [2024-04-19 03:34:19.061088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.061114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.824 qpair failed and we were unable to recover it.
00:20:41.824 [2024-04-19 03:34:19.061267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.061405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.061431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.824 qpair failed and we were unable to recover it.
00:20:41.824 [2024-04-19 03:34:19.061586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.061740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.061766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.824 qpair failed and we were unable to recover it.
00:20:41.824 [2024-04-19 03:34:19.061904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.062091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.062117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.824 qpair failed and we were unable to recover it.
00:20:41.824 [2024-04-19 03:34:19.062273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.062412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.062451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.824 qpair failed and we were unable to recover it.
00:20:41.824 [2024-04-19 03:34:19.062610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.062732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.062758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.824 qpair failed and we were unable to recover it.
00:20:41.824 [2024-04-19 03:34:19.062940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.063094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.063120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.824 qpair failed and we were unable to recover it.
00:20:41.824 [2024-04-19 03:34:19.063281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.063410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.063447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.824 qpair failed and we were unable to recover it.
00:20:41.824 [2024-04-19 03:34:19.063600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.063762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.063789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.824 qpair failed and we were unable to recover it.
00:20:41.824 [2024-04-19 03:34:19.063918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.064041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.824 [2024-04-19 03:34:19.064067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.824 qpair failed and we were unable to recover it.
00:20:41.824 [2024-04-19 03:34:19.064254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.064414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.064465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.064595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.064729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.064755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.064935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.065055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.065081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.065234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.065385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.065412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.065567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.065701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.065727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.065863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.066046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.066073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.066234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.066397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.066435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 
00:20:41.824 [2024-04-19 03:34:19.066561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.066720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.066746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.066876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.067006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.067032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.067179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.067362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.067395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.067556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.067745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.067771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.067899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.068053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.068079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.068233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.068439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.068465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.068594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.068746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.068772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 
00:20:41.824 [2024-04-19 03:34:19.068929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.069084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.069110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.069294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.069445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.069472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.069621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.069745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.069775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.069910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.070048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.070074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.070194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.070329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.070357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.824 qpair failed and we were unable to recover it. 00:20:41.824 [2024-04-19 03:34:19.070520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.824 [2024-04-19 03:34:19.070656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.070684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.070837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.070992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.071018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 
00:20:41.825 [2024-04-19 03:34:19.071178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.071358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.071391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.071524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.071678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.071705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.071837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.071988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.072015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.072178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.072326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.072353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.072500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.072684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.072711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.072866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.072996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.073027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.073182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.073340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.073366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 
00:20:41.825 [2024-04-19 03:34:19.073536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.073679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.073705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.073855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.074015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.074041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.074192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.074347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.074372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.074513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.074689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.074721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.074877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.075035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.075062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.075244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.075403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.075439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.075560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.075687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.075713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 
00:20:41.825 [2024-04-19 03:34:19.075884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.076034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.076060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.076243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.076432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.076459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.076616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.076807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.076834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.076965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.077114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.077140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.077265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.077442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.077469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.077629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.077790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.077815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.077973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.078129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.078155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 
00:20:41.825 [2024-04-19 03:34:19.078340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.078485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.078512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.078665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.078823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.078850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.079024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.079182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.079208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.079340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.079488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.079515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.079671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.079829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.079855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.079996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.080157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.080184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.825 qpair failed and we were unable to recover it. 00:20:41.825 [2024-04-19 03:34:19.080315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.080442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.825 [2024-04-19 03:34:19.080469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 
00:20:41.826 [2024-04-19 03:34:19.080629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.080820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.080847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.080978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.081140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.081168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.081354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.081529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.081556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.081721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.081848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.081875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.082035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.082160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.082186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.082338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.082503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.082530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.082692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.082845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.082871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 
00:20:41.826 [2024-04-19 03:34:19.083021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.083175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.083202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.083357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.083502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.083530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.083680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.083859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.083885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.084012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.084172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.084198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.084331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.084455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.084482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.084635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.084777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.084803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.084957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.085110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.085136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 
00:20:41.826 [2024-04-19 03:34:19.085301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.085485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.085512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.085630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.085790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.085817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.085975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.086158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.086185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.086338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.086497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.086524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.086707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.086866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.086893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.087079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.087238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.087264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.087402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.087556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.087583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 
00:20:41.826 [2024-04-19 03:34:19.087708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.087857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.087884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.088038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.088221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.088248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.088419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.088574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.088600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.088783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.088935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.088962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.089144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.089272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.089298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.089462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.089623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.089660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 00:20:41.826 [2024-04-19 03:34:19.089836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.089994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.090020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.826 qpair failed and we were unable to recover it. 
00:20:41.826 [2024-04-19 03:34:19.090171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.090326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.826 [2024-04-19 03:34:19.090356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.090529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.090664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.090689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.090879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.091068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.091094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.091251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.091404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.091433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.091573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.091765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.091791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.091948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.092126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.092153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.092291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.092426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.092453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 
00:20:41.827 [2024-04-19 03:34:19.092604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.092760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.092787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.092953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.093072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.093098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.093255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.093416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.093444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.093607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.093763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.093789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.093953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.094079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.094105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.094291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.094425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.094451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.094608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.094797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.094823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 
00:20:41.827 [2024-04-19 03:34:19.095005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.095188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.095214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.095371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.095533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.095559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.095714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.095871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.095897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.096054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.096233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.096259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.096440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.096601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.096627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.096785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.096964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.096990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 00:20:41.827 [2024-04-19 03:34:19.097117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.097268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.097294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it. 
00:20:41.827 [2024-04-19 03:34:19.097458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.097615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.827 [2024-04-19 03:34:19.097641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.827 qpair failed and we were unable to recover it.
00:20:41.827 [... the identical sequence — two posix_sock_create connect() failures (errno = 111), one nvme_tcp_qpair_connect_sock error for tqpair=0x1f33f30 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 03:34:19.097800 through 03:34:19.149645; duplicate entries elided ...]
00:20:41.833 [2024-04-19 03:34:19.149773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.149955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.149981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.833 qpair failed and we were unable to recover it. 00:20:41.833 [2024-04-19 03:34:19.150166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.150329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.150356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.833 qpair failed and we were unable to recover it. 00:20:41.833 [2024-04-19 03:34:19.150558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.150687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.150713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.833 qpair failed and we were unable to recover it. 00:20:41.833 [2024-04-19 03:34:19.150847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.151016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.151042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.833 qpair failed and we were unable to recover it. 00:20:41.833 [2024-04-19 03:34:19.151197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.151331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.151358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.833 qpair failed and we were unable to recover it. 00:20:41.833 [2024-04-19 03:34:19.151523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.151686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.151712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.833 qpair failed and we were unable to recover it. 00:20:41.833 [2024-04-19 03:34:19.151898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.152032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.152058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.833 qpair failed and we were unable to recover it. 
00:20:41.833 [2024-04-19 03:34:19.152212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.833 [2024-04-19 03:34:19.152379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.152424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.152559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.152694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.152721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.152884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.153017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.153043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.153208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.153333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.153359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.153523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.153650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.153677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.153830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.153975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.154000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.154182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.154336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.154361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 
00:20:41.834 [2024-04-19 03:34:19.154505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.154655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.154681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.154860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.155015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.155041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.155195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.155318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.155344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.155506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.155640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.155666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.155806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.155962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.155989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.156119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.156275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.156302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.156488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.156636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.156663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 
00:20:41.834 [2024-04-19 03:34:19.156820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.156948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.156975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.157131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.157293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.157319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.157471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.157648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.157675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.157831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.157955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.157981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.158132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.158297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.158323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.158485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.158616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.158642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.158799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.158953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.158979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 
00:20:41.834 [2024-04-19 03:34:19.159110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.159271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.159300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.159427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.159584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.159610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.159747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.159931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.159958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.160102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.160236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.160262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.160420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.160569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.160595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.160751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.160929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.160956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.161107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.161261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.161288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 
00:20:41.834 [2024-04-19 03:34:19.161446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.161608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.161635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.834 qpair failed and we were unable to recover it. 00:20:41.834 [2024-04-19 03:34:19.161784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.161947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.834 [2024-04-19 03:34:19.161973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.162131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.162282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.162308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.162495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.162670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.162697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.162887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.163070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.163100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.163231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.163374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.163406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.163566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.163728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.163755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 
00:20:41.835 [2024-04-19 03:34:19.163887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.164050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.164077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.164234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.164371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.164404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.164537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.164666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.164692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.164815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.164966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.164992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.165117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.165276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.165301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.165457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.165592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.165618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.165746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.165875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.165901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 
00:20:41.835 [2024-04-19 03:34:19.166033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.166194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.166220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.166349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.166489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.166516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.166639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.166787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.166813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.166992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.167127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.167153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.167318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.167449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.167476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.167635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.167780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.167806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.167954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.168081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.168107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 
00:20:41.835 [2024-04-19 03:34:19.168264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.168446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.168484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.168608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.168741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.168767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.168890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.169040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.169066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.169219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.169372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.169405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.169546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.169703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.169729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.169883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.170042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.170069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 00:20:41.835 [2024-04-19 03:34:19.170193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.170324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.835 [2024-04-19 03:34:19.170350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.835 qpair failed and we were unable to recover it. 
00:20:41.836 [2024-04-19 03:34:19.170536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.170665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.170691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.170833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.170989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.171015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.171163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.171316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.171342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.171471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.171612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.171638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.171800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.171931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.171957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.172083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.172204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.172230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.172387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.172518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.172544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 
00:20:41.836 [2024-04-19 03:34:19.172736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.172895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.172921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.173054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.173188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.173215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.173352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.173519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.173546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.173673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.173828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.173855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.173989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.174147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.174173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.174351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.174508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.174535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.174707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.174839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.174865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 
00:20:41.836 [2024-04-19 03:34:19.175012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.175147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.175173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.175303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.175435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.175462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.175614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.175768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.175795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.175960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.176099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.176125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.176278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.176449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.176478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.176650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.176806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.176833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.177016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.177139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.177166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 
00:20:41.836 [2024-04-19 03:34:19.177299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.177445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.177472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.177603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.177758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.177784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.177941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.178066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.178092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.178212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.178348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.178376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.178548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.178680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.178706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.178867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.179023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.179049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.179207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.179365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.179414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 
00:20:41.836 [2024-04-19 03:34:19.179560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.179695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.836 [2024-04-19 03:34:19.179723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.836 qpair failed and we were unable to recover it. 00:20:41.836 [2024-04-19 03:34:19.179905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.180035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.180061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.180218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.180346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.180372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.180562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.180695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.180721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.180840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.180997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.181023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.181159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.181292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.181319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.181471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.181630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.181656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 
00:20:41.837 [2024-04-19 03:34:19.181791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.181944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.181970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.182125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.182287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.182321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.182473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.182664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.182690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.182833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.182957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.182983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.183146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.183279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.183307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.183476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.183626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.183653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.183791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.183948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.183975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 
00:20:41.837 [2024-04-19 03:34:19.184140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.184306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.184332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.184471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.184652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.184678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.184838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.184972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.184998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.185148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.185308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.185334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.185469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.185594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.185620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.185776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.185929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.185956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 00:20:41.837 [2024-04-19 03:34:19.186092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.186248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.837 [2024-04-19 03:34:19.186273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.837 qpair failed and we were unable to recover it. 
00:20:41.837 [2024-04-19 03:34:19.186432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.837 [2024-04-19 03:34:19.186586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.837 [2024-04-19 03:34:19.186613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.837 qpair failed and we were unable to recover it.
00:20:41.837 [2024-04-19 03:34:19.186747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.837 [2024-04-19 03:34:19.186903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.837 [2024-04-19 03:34:19.186929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.837 qpair failed and we were unable to recover it.
00:20:41.837 [2024-04-19 03:34:19.187088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.837 [2024-04-19 03:34:19.187246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.837 [2024-04-19 03:34:19.187277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.837 qpair failed and we were unable to recover it.
00:20:41.837 [2024-04-19 03:34:19.187444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.837 [2024-04-19 03:34:19.187581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.837 [2024-04-19 03:34:19.187608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.837 qpair failed and we were unable to recover it.
00:20:41.837 [2024-04-19 03:34:19.187766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.187914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.187940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.188071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.188204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.188230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.188397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.188552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.188578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.188764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.188912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.188938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.189075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.189200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.189226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.189361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.189501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.189527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.189666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.189820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.189846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.189983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.190146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.190172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.190304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.190440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.190467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.190621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.190753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.190780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.190916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.191049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.191075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.191237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.191392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.191419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.191557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.191693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.191720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.191856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.192039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.192066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.192226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.192376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.192407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.192539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.192675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.192701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.192839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.192976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.193002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.193161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.193339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.193365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.193524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.193709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.193735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.193868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.194051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.194077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.194208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.194402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.194429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.194588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.194755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.194781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.194930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.195061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.195087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.195271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.195399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.195427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.195590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.195746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.195772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.195926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.196085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.196116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.196239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.196404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.196430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.196564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.196712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.196738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.196867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.196997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.197024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.838 qpair failed and we were unable to recover it.
00:20:41.838 [2024-04-19 03:34:19.197145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.838 [2024-04-19 03:34:19.197301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.197327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.197494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.197626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.197652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.197801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.197954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.197980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.198136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.198285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.198312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.198435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.198566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.198592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.198752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.198905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.198931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.199082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.199234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.199265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.199417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.199568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.199594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.199777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.199956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.199982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.200134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.200312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.200338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.200498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.200625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.200651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.200819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.200949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.200975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.201131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.201264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.201290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.201448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.201584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.201610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.201763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.201896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.201922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.202083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.202222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.202248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.202378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.202537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.202563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.202730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.202915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.202941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.203099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.203252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.203278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.203409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.203545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.203571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.203756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.203914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.203940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.204071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.204224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.204249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.204372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.204529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.204555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.204716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.204872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.204898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.205051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.205209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.205235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.205387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.205522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.205548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.205683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.205840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.205866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.205999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.206153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.206179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.206350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.206492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.206519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.839 qpair failed and we were unable to recover it.
00:20:41.839 [2024-04-19 03:34:19.206679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.839 [2024-04-19 03:34:19.206851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.206878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.207029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.207209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.207236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.207397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.207518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.207545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.207678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.207806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.207832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.207956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.208139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.208165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.208322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.208479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.208506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.208647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.208806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.208832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.208983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.209153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.209179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.209318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.209461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.209488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.209667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.209821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.209847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.209982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.210143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.210169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.210303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.210455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.210482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.210637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.210762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.210788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.210919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.211058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.211084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.211210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.211347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.211375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.211565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.211694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.211720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.211868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.212002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.212028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.212150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.212297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.212323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.212483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.212639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.212666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.212797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.212947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.212973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.213091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.213282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.213308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.213466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.213637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.213663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.213789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.213971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.213997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.214155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.214317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.214343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.214531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.214665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.214692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.214845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.214975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.215001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.215183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.215327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.215354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.215510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.215641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.215668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.215853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.216016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.840 [2024-04-19 03:34:19.216046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.840 qpair failed and we were unable to recover it.
00:20:41.840 [2024-04-19 03:34:19.216209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.216340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.216365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.216507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.216641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.216668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.216796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.216944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.216971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.217122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.217292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.217318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.217445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.217603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.217629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.217765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.217899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.217925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.218050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.218197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.218223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.218359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.218528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.218555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.218691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.218864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.218891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.219020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.219144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.219181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.219347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.219485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.219512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.219665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.219847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.219873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.220009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.220132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.220158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.220292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.220450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.220477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.220620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.220773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.220799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.220925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.221086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.221111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.221267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.221396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.221424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.221551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.221681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.221708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.221835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.221988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.222014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.222173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.222305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.222331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.222462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.222624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.222650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.222813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.222968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.222994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.223151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.223283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.223309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.223494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.223620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.223646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.223806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.223941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.223967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.224116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.224244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.224270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.224405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.224529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.224556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.224679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.224812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.224838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.224999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.225133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.841 [2024-04-19 03:34:19.225159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.841 qpair failed and we were unable to recover it.
00:20:41.841 [2024-04-19 03:34:19.225285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.842 [2024-04-19 03:34:19.225420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.842 [2024-04-19 03:34:19.225447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.842 qpair failed and we were unable to recover it.
00:20:41.842 [2024-04-19 03:34:19.225581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.842 [2024-04-19 03:34:19.225737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.842 [2024-04-19 03:34:19.225763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.842 qpair failed and we were unable to recover it.
00:20:41.842 [2024-04-19 03:34:19.225922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.842 [2024-04-19 03:34:19.226054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.842 [2024-04-19 03:34:19.226080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.842 qpair failed and we were unable to recover it.
00:20:41.842 [2024-04-19 03:34:19.226256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.842 [2024-04-19 03:34:19.226395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.842 [2024-04-19 03:34:19.226422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.842 qpair failed and we were unable to recover it.
00:20:41.842 [2024-04-19 03:34:19.226551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.842 [2024-04-19 03:34:19.226722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.842 [2024-04-19 03:34:19.226748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.842 qpair failed and we were unable to recover it.
00:20:41.842 [2024-04-19 03:34:19.226907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.842 [2024-04-19 03:34:19.227070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.842 [2024-04-19 03:34:19.227097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.842 qpair failed and we were unable to recover it.
00:20:41.842 [2024-04-19 03:34:19.227258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.227386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.227413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.227559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.227703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.227730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.227856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.228016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.228042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.228222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.228351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.228378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.228513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.228647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.228673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.228797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.228933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.228960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.229142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.229278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.229304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 
00:20:41.842 [2024-04-19 03:34:19.229465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.229589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.229615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.229778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.229930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.229956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.230081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.230242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.230268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.230405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.230525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.230552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.230702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.230824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.230850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.231000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.231154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.231180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.231341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.231502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.231529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 
00:20:41.842 [2024-04-19 03:34:19.231685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.231809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.231835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.232006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.232186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.232218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.232336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.232475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.232502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.232665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.232823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.232849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.842 qpair failed and we were unable to recover it. 00:20:41.842 [2024-04-19 03:34:19.233010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.233165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.842 [2024-04-19 03:34:19.233191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.233327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.233485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.233513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.233647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.233782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.233808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 
00:20:41.843 [2024-04-19 03:34:19.233996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.234170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.234197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.234356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.234499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.234525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.234687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.234822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.234848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.235017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.235152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.235178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.235328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.235481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.235508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.235671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.235796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.235822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.235955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.236075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.236101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 
00:20:41.843 [2024-04-19 03:34:19.236231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.236411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.236438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.236568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.236722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.236748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.236874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.237001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.237027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.237160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.237321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.237347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.237481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.237614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.237640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.237794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.237948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.237975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.238103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.238277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.238303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 
00:20:41.843 [2024-04-19 03:34:19.238458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.238613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.238639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.238770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.238892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.238918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.239078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.239205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.239231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.239391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.239542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.239569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.239737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.239861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.239887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.240046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.240201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.240227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.240354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.240479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.240506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 
00:20:41.843 [2024-04-19 03:34:19.240658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.240788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.240814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.240974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.241110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.241137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.241322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.241483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.241510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.241678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.241813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.241839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.242000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.242122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.242148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.843 [2024-04-19 03:34:19.242289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.242447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.843 [2024-04-19 03:34:19.242475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.843 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.242610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.242737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.242763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 
00:20:41.844 [2024-04-19 03:34:19.242946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.243069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.243095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.243266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.243405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.243433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.243566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.243716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.243742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.243875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.243998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.244024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.244156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.244309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.244335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.244506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.244689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.244715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.244876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.245032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.245058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 
00:20:41.844 [2024-04-19 03:34:19.245218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.245373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.245404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.245562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.245721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.245747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.245901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.246034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.246061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.246188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.246340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.246366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.246515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.246672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.246698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.246866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.246996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.247022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.247158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.247325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.247351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 
00:20:41.844 [2024-04-19 03:34:19.247493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.247622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.247648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.247813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.247937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.247962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.248117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.248243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.248269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.248420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.248573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.248603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.248737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.248891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.248917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.249048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.249170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.249196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.249347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.249505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.249531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 
00:20:41.844 [2024-04-19 03:34:19.249682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.249814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.249842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.250003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.250164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.250190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.250318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.250481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.250509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.250670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.250809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.250835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.250997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.251159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.251186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.251314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.251466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.251498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 00:20:41.844 [2024-04-19 03:34:19.251635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.251818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.844 [2024-04-19 03:34:19.251848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.844 qpair failed and we were unable to recover it. 
00:20:41.844 [2024-04-19 03:34:19.251977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.252115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.252144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.252317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.252444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.252471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.252596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.252723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.252749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.252881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.253063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.253089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.253241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.253389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.253417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.253581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.253731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.253757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.253922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.254072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.254098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 
00:20:41.845 [2024-04-19 03:34:19.254253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.254433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.254460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.254594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.254718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.254744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.254894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.255048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.255074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.255264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.255391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.255418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.255580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.255701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.255728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.255914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.256040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.256065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.256220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.256376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.256408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 
00:20:41.845 [2024-04-19 03:34:19.256572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.256733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.256759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.256889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.257012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.257037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.257194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.257349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.257375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.257534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.257667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.257693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.257854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.258003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.258028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.258188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.258346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.258372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.258565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.258693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.258720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 
00:20:41.845 [2024-04-19 03:34:19.258903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.259054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.259080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.259211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.259358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.259391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.259574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.259696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.259722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.259879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.260116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.260142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.260303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.260451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.260478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.260631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.260764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.260791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.260951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.261079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.261106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 
00:20:41.845 [2024-04-19 03:34:19.261237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.261418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.261445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.845 qpair failed and we were unable to recover it. 00:20:41.845 [2024-04-19 03:34:19.261606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-04-19 03:34:19.261758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.846 [2024-04-19 03:34:19.261785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.846 qpair failed and we were unable to recover it. 00:20:41.846 [2024-04-19 03:34:19.261909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.846 [2024-04-19 03:34:19.262138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.846 [2024-04-19 03:34:19.262164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.846 qpair failed and we were unable to recover it. 00:20:41.846 [2024-04-19 03:34:19.262357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.846 [2024-04-19 03:34:19.262518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.846 [2024-04-19 03:34:19.262545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.846 qpair failed and we were unable to recover it. 00:20:41.846 [2024-04-19 03:34:19.262702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.846 [2024-04-19 03:34:19.262854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.846 [2024-04-19 03:34:19.262880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.846 qpair failed and we were unable to recover it. 00:20:41.846 [2024-04-19 03:34:19.263017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.846 [2024-04-19 03:34:19.263176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.846 [2024-04-19 03:34:19.263202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.846 qpair failed and we were unable to recover it. 00:20:41.846 [2024-04-19 03:34:19.263354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.846 [2024-04-19 03:34:19.263516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.846 [2024-04-19 03:34:19.263542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.846 qpair failed and we were unable to recover it. 
00:20:41.846 [2024-04-19 03:34:19.263692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.846 [2024-04-19 03:34:19.263877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.846 qpair failed and we were unable to recover it.
[The same three-line sequence repeats continuously from 03:34:19.263 through 03:34:19.314: every connect() attempt to 10.0.0.2:4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1f33f30, and each qpair fails without recovery.]
00:20:41.851 [2024-04-19 03:34:19.314788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.851 [2024-04-19 03:34:19.314946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.851 [2024-04-19 03:34:19.314972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.851 qpair failed and we were unable to recover it. 00:20:41.851 [2024-04-19 03:34:19.315126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.851 [2024-04-19 03:34:19.315310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.851 [2024-04-19 03:34:19.315336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.851 qpair failed and we were unable to recover it. 00:20:41.851 [2024-04-19 03:34:19.315476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.315629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.315655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.315801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.315950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.315976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.316159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.316316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.316342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.316474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.316629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.316655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.316846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.316968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.316994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 
00:20:41.852 [2024-04-19 03:34:19.317152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.317309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.317336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.317474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.317628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.317654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.317817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.317958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.317984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.318110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.318247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.318275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.318441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.318600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.318626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.318758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.318913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.318940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.319123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.319281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.319308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 
00:20:41.852 [2024-04-19 03:34:19.319474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.319606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.319638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.319821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.319981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.320008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.320188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.320361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.320393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.320555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.320693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.320721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.320851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.321033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.321059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.321242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.321397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.321424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.321573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.321726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.321752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 
00:20:41.852 [2024-04-19 03:34:19.321905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.322061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.322088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.322242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.322409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.322436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.322561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.322741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.322768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.322920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.323063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.323093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.323275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.323438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.323465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.323624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.323753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.323779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.323943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.324104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.324130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 
00:20:41.852 [2024-04-19 03:34:19.324282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.324466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.324492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.324653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.324806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.324833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.324962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.325122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.325148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.852 qpair failed and we were unable to recover it. 00:20:41.852 [2024-04-19 03:34:19.325308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.852 [2024-04-19 03:34:19.325498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.325525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.325652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.325788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.325814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.325972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.326132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.326158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.326309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.326448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.326475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 
00:20:41.853 [2024-04-19 03:34:19.326606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.326756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.326782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.326934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.327091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.327117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.327280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.327458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.327485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.327647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.327805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.327830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.327992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.328172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.328198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.328384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.328567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.328593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.328722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.328906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.328932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 
00:20:41.853 [2024-04-19 03:34:19.329058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.329213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.329239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.329369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.329499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.329525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.329656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.329814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.329840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.329965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.330146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.330172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.330354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.330533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.330561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.330727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.330904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.330930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.331088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.331215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.331246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 
00:20:41.853 [2024-04-19 03:34:19.331403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.331587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.331614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.331773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.331952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.331978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.332110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.332236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.332262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.332385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.332543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.332569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.332726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.332867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.332900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.333038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.333189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.333215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.333348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.333520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.333549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 
00:20:41.853 [2024-04-19 03:34:19.333687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.333879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.333905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.334074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.334245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.334272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.334408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.334536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.334568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.334724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.334883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.334909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.853 qpair failed and we were unable to recover it. 00:20:41.853 [2024-04-19 03:34:19.335062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.853 [2024-04-19 03:34:19.335221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.335247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.335404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.335563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.335589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.335756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.335915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.335941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 
00:20:41.854 [2024-04-19 03:34:19.336103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.336233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.336259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.336391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.336553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.336579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.336765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.336952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.336978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.337158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.337342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.337368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.337559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.337721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.337748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.337881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.338039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.338064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.338220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.338349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.338375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 
00:20:41.854 [2024-04-19 03:34:19.338521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.338649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.338675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.338805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.338992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.339018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.339179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.339330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.339356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.339551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.339676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.339701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.339838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.339967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.339993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.340147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.340273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.340304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.340482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.340610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.340636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 
00:20:41.854 [2024-04-19 03:34:19.340817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.340942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.340968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.341130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.341308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.341333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.341490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.341612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.341639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.341772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.341932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.341958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.342122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.342248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.342274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.342406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.342524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.342551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.342738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.342892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.342918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 
00:20:41.854 [2024-04-19 03:34:19.343103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.343242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.343276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.343424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.343612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.343638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.343810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.343942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.343974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.344138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.344269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.344302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.344469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.344631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.344657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.854 qpair failed and we were unable to recover it. 00:20:41.854 [2024-04-19 03:34:19.344811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.854 [2024-04-19 03:34:19.344971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.344997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.855 qpair failed and we were unable to recover it. 00:20:41.855 [2024-04-19 03:34:19.345177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.345328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.345354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.855 qpair failed and we were unable to recover it. 
00:20:41.855 [2024-04-19 03:34:19.345519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.345681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.345709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.855 qpair failed and we were unable to recover it. 00:20:41.855 [2024-04-19 03:34:19.345868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.346020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.346046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.855 qpair failed and we were unable to recover it. 00:20:41.855 [2024-04-19 03:34:19.346205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.346330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.346356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.855 qpair failed and we were unable to recover it. 00:20:41.855 [2024-04-19 03:34:19.346524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.346654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.346680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.855 qpair failed and we were unable to recover it. 00:20:41.855 [2024-04-19 03:34:19.346831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.346963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.346989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.855 qpair failed and we were unable to recover it. 00:20:41.855 [2024-04-19 03:34:19.347176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.347338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.347364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.855 qpair failed and we were unable to recover it. 00:20:41.855 [2024-04-19 03:34:19.347498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.347680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.855 [2024-04-19 03:34:19.347706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:41.855 qpair failed and we were unable to recover it. 
00:20:41.855 [2024-04-19 03:34:19.347865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.855 [2024-04-19 03:34:19.348027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.855 [2024-04-19 03:34:19.348053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:41.855 qpair failed and we were unable to recover it.
00:20:41.855 [... the same sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1f33f30 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats without interruption through 2024-04-19 03:34:19.399606 (log time 00:20:42.137); only the timestamps differ ...]
00:20:42.137 [2024-04-19 03:34:19.399757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.399925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.399951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.137 qpair failed and we were unable to recover it. 00:20:42.137 [2024-04-19 03:34:19.400105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.400256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.400282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.137 qpair failed and we were unable to recover it. 00:20:42.137 [2024-04-19 03:34:19.400476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.400621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.400647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.137 qpair failed and we were unable to recover it. 00:20:42.137 [2024-04-19 03:34:19.400802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.400962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.400989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.137 qpair failed and we were unable to recover it. 00:20:42.137 [2024-04-19 03:34:19.401148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.401272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.401298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.137 qpair failed and we were unable to recover it. 00:20:42.137 [2024-04-19 03:34:19.401438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.401620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.401646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.137 qpair failed and we were unable to recover it. 00:20:42.137 [2024-04-19 03:34:19.401774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.401910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.137 [2024-04-19 03:34:19.401938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.137 qpair failed and we were unable to recover it. 
00:20:42.137 [... the same connect() failed / qpair failed sequence repeats for timestamps 03:34:19.402126 through 03:34:19.404069 ...]
00:20:42.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 329032 Killed "${NVMF_APP[@]}" "$@"
00:20:42.137 [2024-04-19 03:34:19.404249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.137 [2024-04-19 03:34:19.404276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420
00:20:42.137 qpair failed and we were unable to recover it.
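The shell status line above is the point of the test: target_disconnect.sh hard-kills the running nvmf target (PID 329032), so every subsequent connect() from the host fails with errno = 111 (ECONNREFUSED) until a new target is started. A minimal sketch of that kill-and-observe step, with NVMF_APP, the core mask, and the settle time as illustrative stand-ins rather than the suite's exact logic:

    NVMF_APP=(./build/bin/nvmf_tgt)         # stand-in for the app array the suite exports
    "${NVMF_APP[@]}" -m 0xF0 & nvmfpid=$!   # run the target in the background
    sleep 2                                 # assumed settle time before the fault injection
    kill -9 "$nvmfpid"                      # hard kill: host connects now fail with errno 111
    wait "$nvmfpid" 2>/dev/null             # bash reports the reaped job as "Killed"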
00:20:42.137 [... connect() failed / qpair failed sequence continues for timestamps 03:34:19.404435 through 03:34:19.404744, interleaved with the xtrace output below ...]
00:20:42.138 03:34:19 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:20:42.138 03:34:19 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:20:42.138 03:34:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:20:42.138 03:34:19 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:42.138 03:34:19 -- common/autotest_common.sh@10 -- # set +x
00:20:42.138 [... connect() failed / qpair failed sequence continues for timestamps 03:34:19.404896 through 03:34:19.406308 ...]
00:20:42.138 [... connect() failed / qpair failed sequence continues for timestamps 03:34:19.406467 through 03:34:19.408619 ...]
00:20:42.138 [... connect() failed / qpair failed sequence continues for timestamps 03:34:19.408801 through 03:34:19.410069, interleaved with the xtrace output below ...]
00:20:42.138 03:34:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:20:42.138 03:34:19 -- nvmf/common.sh@470 -- # nvmfpid=329576
00:20:42.138 03:34:19 -- nvmf/common.sh@471 -- # waitforlisten 329576
00:20:42.138 03:34:19 -- common/autotest_common.sh@817 -- # '[' -z 329576 ']'
00:20:42.138 [... connect() failed / qpair failed sequence continues for timestamps 03:34:19.410204 through 03:34:19.410800 ...]
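Per SPDK's common application options, the relaunch command above selects shared-memory instance 0 (-i 0), a wide tracepoint group mask (-e 0xFFFF), and core mask 0xF0, i.e. CPU cores 4-7, inside the cvl_0_0_ns_spdk network namespace. Decoding such a hex core mask is plain bit arithmetic; a small illustrative snippet, not part of the suite:

    mask=0xF0                               # core mask taken from the nvmf_tgt command above
    for core in {0..7}; do
      (( (mask >> core) & 1 )) && echo "core $core enabled"
    done                                    # prints cores 4, 5, 6 and 7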
00:20:42.138 03:34:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:42.138 03:34:19 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:42.138 03:34:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:42.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:42.138 03:34:19 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:42.138 03:34:19 -- common/autotest_common.sh@10 -- # set +x
00:20:42.138 [... connect() failed / qpair failed sequence continues for timestamps 03:34:19.410954 through 03:34:19.413150 ...]
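waitforlisten then blocks until the new target answers on rpc_addr=/var/tmp/spdk.sock, giving up after max_retries attempts. A sketch of that polling loop, assuming SPDK's scripts/rpc.py as the probe; the probe method and interval are guesses, not the helper's exact implementation:

    rpc_addr=/var/tmp/spdk.sock             # from the xtrace output above
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
      if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        break                               # target is up and answering RPCs
      fi
      sleep 0.5                             # assumed delay between probes
    done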
00:20:42.138 [... the connect() failed / qpair failed sequence repeats without change for timestamps 03:34:19.413283 through 03:34:19.441209 while the host keeps retrying tqpair=0x1f33f30 against 10.0.0.2:4420 ...]
00:20:42.142 [2024-04-19 03:34:19.441398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.441554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.441583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.441730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.441873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.441902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.442076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.442232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.442258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.442420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.442579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.442604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.442770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.442931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.442956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.443085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.443212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.443237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.443397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.443549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.443575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 
00:20:42.142 [2024-04-19 03:34:19.443698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.443853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.443878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.444013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.444151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.444177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.444312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.444451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.444478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.444635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.444770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.444796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.444926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.445082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.445107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.445239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.445421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.445451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.445600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.445777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.445805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 
00:20:42.142 [2024-04-19 03:34:19.445984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.446121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.446163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.446330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.446552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.446579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.446750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.446919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.446947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.447119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.447272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.447299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.447486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.447614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.447640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.447802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.447969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.447995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.448127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.448286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.448311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 
00:20:42.142 [2024-04-19 03:34:19.448484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.448643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.448669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.448821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.448994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.449021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.449184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.449323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.449350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.449510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.449682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.449709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.449885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.450065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.142 [2024-04-19 03:34:19.450091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.142 qpair failed and we were unable to recover it. 00:20:42.142 [2024-04-19 03:34:19.450263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.450420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.450447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.450602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.450763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.450805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 
00:20:42.143 [2024-04-19 03:34:19.450977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.451122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.451149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.451320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.451476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.451503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.451667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.451831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.451857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.452026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.452186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.452213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.452356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.452529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.452571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.452774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.452900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.452944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.453140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.453329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.453354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 
00:20:42.143 [2024-04-19 03:34:19.453525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.453674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.453706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.453834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.453968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.453996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.454118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.454302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.454328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.454503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.454666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.454696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.454823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.455008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.455034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.455173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.455306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.455331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.455514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.455676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.455704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 
00:20:42.143 [2024-04-19 03:34:19.455865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.455998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.456025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.456154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.456332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.456358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.456510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.456644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.456670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.456802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.456958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.456962] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:20:42.143 [2024-04-19 03:34:19.456984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.457035] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.143 [2024-04-19 03:34:19.457126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.457251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.457274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.457424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.457542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.457567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.457727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.457907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.457933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 
00:20:42.143 [2024-04-19 03:34:19.458092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.458254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.458280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.458443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.458579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.458606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.458742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.458901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.458927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.459082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.459229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.459255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.459393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.459557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.459583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.143 qpair failed and we were unable to recover it. 00:20:42.143 [2024-04-19 03:34:19.459708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.459869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.143 [2024-04-19 03:34:19.459895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.460069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.460219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.460244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 
00:20:42.144 [2024-04-19 03:34:19.460427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.460560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.460586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.460713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.460836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.460862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.461029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.461192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.461218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.461389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.461515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.461541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.461727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.461863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.461889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.462057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.462186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.462212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.462376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.462505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.462530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 
00:20:42.144 [2024-04-19 03:34:19.462654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.462794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.462819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.462938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.463081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.463106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.463240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.463417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.463444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.463581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.463714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.463740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.463895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.464026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.464052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.464235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.464413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.464439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.464566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.464718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.464744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 
00:20:42.144 [2024-04-19 03:34:19.464941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.465094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.465119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.465248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.465372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.465404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.465539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.465686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.465712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.465867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.466004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.466033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.466192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.466378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.466410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.466563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.466697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.466722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.466880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.467053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.467088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 
00:20:42.144 [2024-04-19 03:34:19.467259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.467416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.467443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.467610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.467776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.144 [2024-04-19 03:34:19.467801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.144 qpair failed and we were unable to recover it. 00:20:42.144 [2024-04-19 03:34:19.467942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.468088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.468113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.468236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.468408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.468435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.468565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.468726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.468752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.468896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.469045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.469071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.469203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.469333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.469362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 
00:20:42.145 [2024-04-19 03:34:19.469532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.469688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.469715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.469841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.470000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.470026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.470158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.470286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.470313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.470454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.470610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.470637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.470764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.470921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.470947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.471097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.471249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.471275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.471413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.471549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.471575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 
00:20:42.145 [2024-04-19 03:34:19.471708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.471857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.471882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.472026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.472156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.472181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.472335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.472470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.472497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.472663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.472848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.472873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.473008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.473173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.473199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.473355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.473509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.473536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.473721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.473874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.473899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 
00:20:42.145 [2024-04-19 03:34:19.474030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.474155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.474181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.474364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.474563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.474589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.474742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.474898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.474924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.475078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.475233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.475258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.475393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.475516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.475541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.475673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.475823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.475849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.475989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.476144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.476169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 
00:20:42.145 [2024-04-19 03:34:19.476358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.476535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.476561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.476741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.476878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.476903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.477056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.477190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.477215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.145 qpair failed and we were unable to recover it. 00:20:42.145 [2024-04-19 03:34:19.477372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.145 [2024-04-19 03:34:19.477536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.477562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.477742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.477931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.477956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.478109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.478238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.478265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.478452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.478613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.478639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 
00:20:42.146 [2024-04-19 03:34:19.478791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.478916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.478942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.479100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.479252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.479278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.479464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.479593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.479619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.479813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.479961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.479987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.480151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.480310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.480336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.480471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.480629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.480654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.480822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.480977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.481002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 
00:20:42.146 [2024-04-19 03:34:19.481183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.481366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.481399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.481524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.481675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.481700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.481868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.482021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.482045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.482180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.482335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.482361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.482531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.482688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.482714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.482894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.483056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.483082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.483216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.483368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.483406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 
00:20:42.146 [2024-04-19 03:34:19.483547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.483703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.483730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.483888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.484043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.484069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.484193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.484373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.484420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.484551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.484685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.484709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.484873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.485024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.485049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.485236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.485393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.485420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.485577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.485741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.485767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 
00:20:42.146 [2024-04-19 03:34:19.485919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.486075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.486100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.486228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.486391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.486422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.486574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.486709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.486734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.486894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.487026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.146 [2024-04-19 03:34:19.487052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.146 qpair failed and we were unable to recover it. 00:20:42.146 [2024-04-19 03:34:19.487208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.487355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.487392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.487521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.487656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.487691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.487852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.488005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.488030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 
00:20:42.147 [2024-04-19 03:34:19.488148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.488331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.488356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.488532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.488679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.488704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.488840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.488995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.489019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.489178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.489365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.489419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.489581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.489704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.489729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.489869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.490052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.490087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.490232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.490391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.490418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 
00:20:42.147 [2024-04-19 03:34:19.490591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.490733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.490760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.490917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.491054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.491079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.491205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.491362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.491401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.491567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.147 [2024-04-19 03:34:19.491735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.491761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.491899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.492057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.492082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.492235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.492409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.492436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.492595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.492747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.492772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 
00:20:42.147 [2024-04-19 03:34:19.492933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.493060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.493086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.493248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.493405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.493432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.493564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.493697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.493723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.493879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.494041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.494068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.494220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.494385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.494413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.494594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.494740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.494766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.494950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.495109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.495136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 
00:20:42.147 [2024-04-19 03:34:19.495294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.495442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.495469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.495626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.495786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.495812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.495974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.496104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.496131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.496293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.496442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.496469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.147 qpair failed and we were unable to recover it. 00:20:42.147 [2024-04-19 03:34:19.496600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.496784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.147 [2024-04-19 03:34:19.496810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.496940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.497111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.497138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.497308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.497467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.497493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 
00:20:42.148 [2024-04-19 03:34:19.497614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.497746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.497772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.497933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.498077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.498103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.498250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.498395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.498422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.498578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.498746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.498772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.498941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.499077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.499103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.499279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.499438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.499465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.499619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.499784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.499810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 
00:20:42.148 [2024-04-19 03:34:19.499970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.500136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.500162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.500348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.500537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.500563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.500716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.500905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.500931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.501084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.501218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.501244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.501432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.501566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.501593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.501764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.501896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.501923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.502046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.502178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.502204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 
00:20:42.148 [2024-04-19 03:34:19.502361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.502536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.502562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.502745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.502893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.502919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.503051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.503202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.503228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.503388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.503519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.503544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.503674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.503828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.503854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.504014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.504159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.504184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.504316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.504477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.504503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 
00:20:42.148 [2024-04-19 03:34:19.504637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.504798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.504824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.504977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.505135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.505161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.505315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.505475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.505502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.505660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.505816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.505842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.505995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.506151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.506177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.506362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.506534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.148 [2024-04-19 03:34:19.506560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.148 qpair failed and we were unable to recover it. 00:20:42.148 [2024-04-19 03:34:19.506692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.506845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.506875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 
00:20:42.149 [2024-04-19 03:34:19.507059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.507239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.507265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.507426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.507582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.507608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.507747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.507894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.507920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.508104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.508262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.508288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.508456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.508633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.508659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.508820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.508983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.509009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.509244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.509373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.509408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 
00:20:42.149 [2024-04-19 03:34:19.509585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.509772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.509800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.509970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.510135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.510161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.510325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.510480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.510507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.510703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.510893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.510918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.511106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.511269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.511295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.511460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.511623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.511648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.511819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.511985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.512010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 
00:20:42.149 [2024-04-19 03:34:19.512152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.512334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.512360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.512563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.512702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.512727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.512895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.513085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.513111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.513255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.513431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.513457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.513590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.513757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.513782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.513945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.514110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.514136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.514275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.514465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.514490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 
00:20:42.149 [2024-04-19 03:34:19.514661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.514804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.514829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.514960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.515126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.515152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.149 qpair failed and we were unable to recover it. 00:20:42.149 [2024-04-19 03:34:19.515334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.149 [2024-04-19 03:34:19.515471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.515497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.518391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.518567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.518594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.518767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.518957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.518983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.519115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.519279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.519305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.519479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.519618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.519644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 
00:20:42.150 [2024-04-19 03:34:19.519819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.520007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.520036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.520202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.520365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.520396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.520552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.520706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.520730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.520915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.521081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.521108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.521294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.521464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.521490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.521614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.521744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.521770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.521928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.522089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.522115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 
00:20:42.150 [2024-04-19 03:34:19.522251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.522421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.522447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.522585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.522717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.522744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.522908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.523065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.523091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.523221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.523389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.523427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.523589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.523720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.523746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.523935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.524086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.524113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.524286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.524479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.524505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 
00:20:42.150 [2024-04-19 03:34:19.524679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.150 [2024-04-19 03:34:19.524869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.150 [2024-04-19 03:34:19.524895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:42.150 qpair failed and we were unable to recover it.
00:20:42.150 [2024-04-19 03:34:19.525049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.150 [2024-04-19 03:34:19.526885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:42.150 [2024-04-19 03:34:19.527392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.150 [2024-04-19 03:34:19.527445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420
00:20:42.150 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure cycle repeats 5 more times (03:34:19.527646 through 03:34:19.529204) ...]
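The single *NOTICE* interleaved above is an SPDK application coming up (app.c reports the reactor core count inside spdk_app_start()) while the initiator keeps retrying its qpair. For orientation, a bare event-framework app that triggers the same "Total cores available" notice looks roughly like the sketch below; it is a hedged reconstruction against the public spdk/event.h API of recent releases, not code from this test run:

/* Hedged sketch of a bare SPDK event-framework app. Starting it is what
 * makes app.c print "Total cores available: N", as seen once in the log.
 * Not the test's code: the app name is invented, option parsing and
 * error handling are trimmed, and the API shape (spdk_app_opts_init
 * taking the struct size) is assumed from recent SPDK releases. */
#include "spdk/event.h"

static void
app_started(void *ctx)
{
	(void)ctx;
	/* Reactors are running on every available core at this point. */
	spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts;
	int rc;

	(void)argc;
	(void)argv;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "minimal_app";	/* invented name */

	/* "Total cores available: N" is logged during this call. */
	rc = spdk_app_start(&opts, app_started, NULL);
	spdk_app_fini();
	return rc;
}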
00:20:42.150 [2024-04-19 03:34:19.529400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.529536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.529564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.529737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.529900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.529926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.530099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.530289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.150 [2024-04-19 03:34:19.530315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.150 qpair failed and we were unable to recover it. 00:20:42.150 [2024-04-19 03:34:19.530476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.530631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.530657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.530812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.530978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.531004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.531194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.531358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.531389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.531548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.531737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.531763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 
00:20:42.151 [2024-04-19 03:34:19.531950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.532113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.532140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.532280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.532432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.532461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.532621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.532758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.532784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.532980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.536415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.536464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.536708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.536870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.536897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.537060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.537228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.537256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.537422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.537592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.537621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 
00:20:42.151 [2024-04-19 03:34:19.537767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.537905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.537948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.538118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.538284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.538311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.538470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.538652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.538678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.538943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.539141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.539169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.539331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.539499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.539526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.539687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.539848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.539875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.540038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.540207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.540233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 
00:20:42.151 [2024-04-19 03:34:19.540403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.540575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.540601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.540736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.540900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.540927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.541089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.541288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.541315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.541557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.541711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.541746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.541918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.542089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.542125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.542319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.542474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.542502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.542681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.542839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.542880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 
00:20:42.151 [2024-04-19 03:34:19.543042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.543203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.543229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.543395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.543582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.543609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.543794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.543969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.543995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.544142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.544301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.544328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.151 qpair failed and we were unable to recover it. 00:20:42.151 [2024-04-19 03:34:19.544489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.151 [2024-04-19 03:34:19.544623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.544649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.544805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.544955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.544982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.545146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.545309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.545336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 
00:20:42.152 [2024-04-19 03:34:19.545499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.545637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.545663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.545790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.545949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.545976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.546143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.546279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.546306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.546463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.546626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.546653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.546812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.546970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.546997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.547161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.547322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.547349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.547528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.547685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.547712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 
00:20:42.152 [2024-04-19 03:34:19.547878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.548009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.548036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.548196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.548324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.548351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.548514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.548649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.548677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.548837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.549002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.549029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.549215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.549365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.549397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.549540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.549699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.549726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.549913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.550079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.550105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 
00:20:42.152 [2024-04-19 03:34:19.550298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.550464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.550492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.550675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.550814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.550842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.551006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.551189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.551216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.551363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.551528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.551555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.551722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.551853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.551879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.552006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.552135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.552162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.552332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.552521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.552548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 
00:20:42.152 [2024-04-19 03:34:19.552678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.552851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.552878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.553060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.553196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.553224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.553390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.553548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.553576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.553730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.553884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.553912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.554069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.554202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.554239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.152 qpair failed and we were unable to recover it. 00:20:42.152 [2024-04-19 03:34:19.554422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.152 [2024-04-19 03:34:19.554553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.554580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.554709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.554871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.554897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 
00:20:42.153 [2024-04-19 03:34:19.555060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.555209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.555236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.555402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.555530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.555557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.555685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.555847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.555877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.556048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.556210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.556237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.556374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.556526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.556554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.556683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.556833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.556860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.557032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.557214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.557242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 
00:20:42.153 [2024-04-19 03:34:19.557400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.557563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.557590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.557752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.557888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.557915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.558069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.558235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.558262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.558427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.558559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.558586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.558725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.558884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.558910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.559066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.559195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.559222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.559385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.559514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.559541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 
00:20:42.153 [2024-04-19 03:34:19.559701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.559852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.559879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.560064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.560220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.560247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.560410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.560541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.560567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.560721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.560903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.560930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.561110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.561252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.561280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.561446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.561609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.561637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.561800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.561960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.561987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 
00:20:42.153 [2024-04-19 03:34:19.562142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.562301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.562328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.153 qpair failed and we were unable to recover it. 00:20:42.153 [2024-04-19 03:34:19.562468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.153 [2024-04-19 03:34:19.562648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.562675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.562849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.563003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.563029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.563169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.563334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.563361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.563518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.563640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.563668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.563837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.563991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.564018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.564175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.564322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.564349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 
00:20:42.154 [2024-04-19 03:34:19.564486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.564622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.564650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.564892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.565030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.565058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.565189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.565377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.565411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.565567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.565695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.565722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.565880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.566039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.566066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.566196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.566322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.566348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.566519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.566660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.566687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 
00:20:42.154 [2024-04-19 03:34:19.566877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.567028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.567054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.567211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.567363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.567395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.567555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.567707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.567734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.567889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.568046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.568077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.568206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.568360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.568395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.568562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.568794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.568821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.568961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.569117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.569144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 
00:20:42.154 [2024-04-19 03:34:19.569305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.569447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.569475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.569615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.569775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.569802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.569964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.570092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.570120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.570284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.570428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.570456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.570613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.570782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.570809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.570996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.571158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.571185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 00:20:42.154 [2024-04-19 03:34:19.571314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.571473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.154 [2024-04-19 03:34:19.571505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.154 qpair failed and we were unable to recover it. 
00:20:42.160 [2024-04-19 03:34:19.620273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.620439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.620467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.620598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.620761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.620788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.620944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.621078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.621105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.621291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.621452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.621481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.621638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.621873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.621900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.622064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.622218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.622246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.622424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.622610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.622636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 
00:20:42.160 [2024-04-19 03:34:19.622794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.622932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.622961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.623117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.623276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.623303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.623440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.623599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.623628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.623811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.623976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.624008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.624163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.624338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.624366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.624512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.624748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.624775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.624944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.625080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.625107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 
00:20:42.160 [2024-04-19 03:34:19.625234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.625364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.625409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.625577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.625729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.625756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.625913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.626098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.626125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.626283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.626433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.626461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.626643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.626776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.626803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.626981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.627111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.627140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.627296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.627465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.627500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 
00:20:42.160 [2024-04-19 03:34:19.627634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.627790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.627817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.627975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.628102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.628131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.628316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.628454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.628483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.628636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.628820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.628847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.629008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.629188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.629216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.629353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.629484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.629511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.160 qpair failed and we were unable to recover it. 00:20:42.160 [2024-04-19 03:34:19.629663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.629822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.160 [2024-04-19 03:34:19.629849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 
00:20:42.161 [2024-04-19 03:34:19.629983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.630143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.630171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.630327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.630454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.630482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.630664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.630848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.630883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.631037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.631195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.631222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.631402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.631534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.631562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.631716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.631844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.631871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.632031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.632159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.632186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 
00:20:42.161 [2024-04-19 03:34:19.632340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.632501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.632529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.632767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.632955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.632982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.633145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.633302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.633329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.633467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.633648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.633675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.633808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.633986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.634013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.634196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.634408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.634441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.634580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.634733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.634761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 
00:20:42.161 [2024-04-19 03:34:19.634942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.635104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.635131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.635263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.635434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.635462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.635617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.635774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.635802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.635966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.636120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.636147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.636283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.636440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.636468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.636598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.636752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.636780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.636912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.637043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.637071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 
00:20:42.161 [2024-04-19 03:34:19.637228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.637394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.637423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.637582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.637741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.637770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.637960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.638146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.638173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.638327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.638478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.638506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.638642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.638798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.638826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.638988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.639124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.639153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 00:20:42.161 [2024-04-19 03:34:19.639306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.639439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.639467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.161 qpair failed and we were unable to recover it. 
00:20:42.161 [2024-04-19 03:34:19.639599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.639755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.161 [2024-04-19 03:34:19.639782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.639942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.640106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.640132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.640281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.640440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.640467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.640601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.640726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.640752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.640933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.641115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.641142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.641309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.641497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.641524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.641761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.641924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.641952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 
00:20:42.162 [2024-04-19 03:34:19.642136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.642296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.642324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.642490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.642652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.642679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.642835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.643005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.643032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.643193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.643347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.643375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.643543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.643674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.643704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.643883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.644065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.644092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.644217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.644374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.644409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 
00:20:42.162 [2024-04-19 03:34:19.644570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.644721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.644747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.644906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.645087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.645114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.645246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.645387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.645414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.645574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.645726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.645753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.645890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.646043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.646070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.646204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.646356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.646396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.646556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.646693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.646720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 
00:20:42.162 [2024-04-19 03:34:19.646856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.646990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.647017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.647147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.647277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.647304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.647539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.647668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.647695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.647845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.648002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.648029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.162 qpair failed and we were unable to recover it. 00:20:42.162 [2024-04-19 03:34:19.648223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.162 [2024-04-19 03:34:19.648371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.648410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.648363] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.163 [2024-04-19 03:34:19.648411] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.163 [2024-04-19 03:34:19.648429] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.163 [2024-04-19 03:34:19.648442] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.163 [2024-04-19 03:34:19.648453] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
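The app_setup_trace notices above spell out the capture recipe; gathered in one place (the command and the shm path are verbatim from the log, the instance id -i 0 is assumed to still be current):

    # Snapshot the nvmf tracepoint events of the running target
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/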
00:20:42.163 [2024-04-19 03:34:19.648540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:42.163 [2024-04-19 03:34:19.648567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:42.163 [2024-04-19 03:34:19.648593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:20:42.163 [2024-04-19 03:34:19.648597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:42.163 [2024-04-19 03:34:19.648572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.648749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.648774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.648942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.649100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.649124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.649254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.649405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.649431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.649563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.649696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.649722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.649882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.650019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.650045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.650203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.650437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.650464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it.
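The four reactor_run notices above show the target's event framework polling on cores 4-7, i.e. a CPU mask with bits 4 through 7 set (0xF0). A hypothetical launch line that would produce exactly this core layout (the binary path and the -m flag value are assumptions, not taken from this log):

    # Cores 4-7 correspond to cpumask 0xF0 (0x10 + 0x20 + 0x40 + 0x80)
    ./build/bin/nvmf_tgt -m 0xF0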
00:20:42.163 [2024-04-19 03:34:19.650599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.650833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.650863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.651048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.651205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.651231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.651391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.651587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.651614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.651833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.651990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.652016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.652175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.652331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.652358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.652527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.652690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.652716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.652880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.653036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.653063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 
00:20:42.163 [2024-04-19 03:34:19.653215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.653391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.653419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.653553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.653711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.653738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.653917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.654146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.654172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.654357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.654508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.654540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.654682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.654839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.654864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.655017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.655151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.655177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.655366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.655561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.655588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 
00:20:42.163 [2024-04-19 03:34:19.655726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.655962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.655988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.656113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.656265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.656291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.656445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.656601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.656628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.656754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.656889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.656915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.657067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.657220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.657247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.163 [2024-04-19 03:34:19.657415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.657575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.163 [2024-04-19 03:34:19.657602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.163 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.657767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.657893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.657919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 
00:20:42.164 [2024-04-19 03:34:19.658077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.658293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.658319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.658482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.658638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.658664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.658883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.659034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.659061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.659215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.659372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.659405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.659531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.659666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.659693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.659825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.659946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.659973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.660189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.660371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.660404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 
00:20:42.164 [2024-04-19 03:34:19.660562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.660725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.660753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.660885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.661102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.661129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.661289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.661446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.661474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.661634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.661813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.661839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.661989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.662145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.662172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.662307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.662451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.662478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.662643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.662810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.662836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 
00:20:42.164 [2024-04-19 03:34:19.662995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.663146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.663173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.663316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.663448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.663476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.663617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.663755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.663782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.664036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.664189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.664228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.664392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.664552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.664591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f04b8000b90 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.664797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.664951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.664984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.665139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.665283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.665316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 
00:20:42.164 [2024-04-19 03:34:19.665462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.665604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.665634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.665786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.665939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.665970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.666123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.666274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.666303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.666466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.666603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.666633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.666795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.666932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.666960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.667109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.667293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.667323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 00:20:42.164 [2024-04-19 03:34:19.667474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.667607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.164 [2024-04-19 03:34:19.667638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.164 qpair failed and we were unable to recover it. 
00:20:42.165 [2024-04-19 03:34:19.667797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.667932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.667961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.668091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.668236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.668265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.668450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.668601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.668631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.668791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.668943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.668972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.669141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.669316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.669346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.669531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.669677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.669708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.669851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.670027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.670058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 
00:20:42.165 [2024-04-19 03:34:19.670224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.670369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.670407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.670550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.670689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.670719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.670867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.671003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.671033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.671206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.671349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.671377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.671527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.671705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.671731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.671872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.672004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.672031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.672184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.672371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.672406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 
00:20:42.165 [2024-04-19 03:34:19.672529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.672655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.672681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.672814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.672948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.672974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.673122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.673262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.673288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.673434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.673560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.673587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.673724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.673843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.673870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.674007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.674132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.674158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 00:20:42.165 [2024-04-19 03:34:19.674308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.674463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.165 [2024-04-19 03:34:19.674490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.165 qpair failed and we were unable to recover it. 
00:20:42.165 [2024-04-19 03:34:19.674649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.674776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.674802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.436 qpair failed and we were unable to recover it. 00:20:42.436 [2024-04-19 03:34:19.674974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.675144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.675170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.436 qpair failed and we were unable to recover it. 00:20:42.436 [2024-04-19 03:34:19.675298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.675429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.675457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.436 qpair failed and we were unable to recover it. 00:20:42.436 [2024-04-19 03:34:19.675599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.675754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.675780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.436 qpair failed and we were unable to recover it. 00:20:42.436 [2024-04-19 03:34:19.675908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.676042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.676069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.436 qpair failed and we were unable to recover it. 00:20:42.436 [2024-04-19 03:34:19.676232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.676374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.676405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.436 qpair failed and we were unable to recover it. 00:20:42.436 [2024-04-19 03:34:19.676545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.676673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.676701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.436 qpair failed and we were unable to recover it. 
00:20:42.436 [2024-04-19 03:34:19.676877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.677017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.677044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.436 qpair failed and we were unable to recover it. 00:20:42.436 [2024-04-19 03:34:19.677172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.677297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.677323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.436 qpair failed and we were unable to recover it. 00:20:42.436 [2024-04-19 03:34:19.677489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.677629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.677655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.436 qpair failed and we were unable to recover it. 00:20:42.436 [2024-04-19 03:34:19.677784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.677933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.436 [2024-04-19 03:34:19.677960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.436 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.678096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.678272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.678303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.678441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.678615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.678641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.678778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.679011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.679036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 
00:20:42.437 [2024-04-19 03:34:19.679225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.679364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.679397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.679554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.679717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.679742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.679878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.680014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.680040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.680216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.680342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.680367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.680513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.680647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.680673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.680868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.681094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.681120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.681252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.681408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.681442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 
00:20:42.437 [2024-04-19 03:34:19.681649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.681807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.681833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.681991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.682116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.682142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.682273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.682439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.682466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.682636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.682818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.682843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.682974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.683139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.683165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.683310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.683475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.683503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.683634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.683814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.683841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 
00:20:42.437 [2024-04-19 03:34:19.683965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.684099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.684125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.684271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.684427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.684453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.684619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.684770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.684796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.684925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.685083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.685108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.685233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.685415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.685442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.685628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.685793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.685820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.685960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.686116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.686141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 
00:20:42.437 [2024-04-19 03:34:19.686304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.686446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.686473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.686602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.686754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.686780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.686908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.687039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.687065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.687187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.687314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.687340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.437 [2024-04-19 03:34:19.687468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.687602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.437 [2024-04-19 03:34:19.687630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.437 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.687784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.687918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.687943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.688076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.688199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.688225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 
00:20:42.438 [2024-04-19 03:34:19.688378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.688513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.688539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.688690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.688828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.688853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.688979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.689139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.689165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.689298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.689426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.689454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.689592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.689722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.689750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.689881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.690004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.690030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.690175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.690300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.690326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 
00:20:42.438 [2024-04-19 03:34:19.690460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.690598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.690625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.690754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.690909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.690934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.691065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.691245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.691271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.691414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.691552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.691579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.691722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.691874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.691900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.692080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.692240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.692266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.692391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.692515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.692541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 
00:20:42.438 [2024-04-19 03:34:19.692667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.692799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.692825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.692964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.693123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.693149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.693280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.693438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.693465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.693599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.693733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.693759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.693888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.694052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.694078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.694212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.694336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.694362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.694516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.694649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.694680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 
00:20:42.438 [2024-04-19 03:34:19.694815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.694939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.694965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.695096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.695276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.695302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.695494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.695651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.695677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.695805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.695930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.695956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.696108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.696314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.696340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.696473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.696657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.438 [2024-04-19 03:34:19.696682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.438 qpair failed and we were unable to recover it. 00:20:42.438 [2024-04-19 03:34:19.696808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.696967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.696993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.439 qpair failed and we were unable to recover it. 
00:20:42.439 [2024-04-19 03:34:19.697156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.697313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.697338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.439 qpair failed and we were unable to recover it. 00:20:42.439 [2024-04-19 03:34:19.697470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.697630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.697655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.439 qpair failed and we were unable to recover it. 00:20:42.439 [2024-04-19 03:34:19.697784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.697916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.697942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.439 qpair failed and we were unable to recover it. 00:20:42.439 [2024-04-19 03:34:19.698071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.698208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.698234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.439 qpair failed and we were unable to recover it. 00:20:42.439 [2024-04-19 03:34:19.698399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.698529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.698555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.439 qpair failed and we were unable to recover it. 00:20:42.439 [2024-04-19 03:34:19.698678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.698857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.698884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.439 qpair failed and we were unable to recover it. 00:20:42.439 [2024-04-19 03:34:19.699016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.699173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.439 [2024-04-19 03:34:19.699199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.439 qpair failed and we were unable to recover it. 
00:20:42.444 [2024-04-19 03:34:19.743452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.743577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.743604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.743728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.743866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.743893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.744012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.744150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.744176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.744309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.744445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.744471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.744603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.744757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.744783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.744925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.745054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.745082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.745208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.745387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.745414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 
00:20:42.444 [2024-04-19 03:34:19.745540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.745665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.745692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.745818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.745940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.745971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.746126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.746275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.746301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.746463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.746590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.746616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.746774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.746905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.746931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.747068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.747196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.747222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.747400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.747541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.747568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 
00:20:42.444 [2024-04-19 03:34:19.747693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.747851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.747877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.748012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.748143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.748169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.748315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.748442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.748469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.748599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.748727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.748753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.748882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.749020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.749046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.749194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.749323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.749349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.444 qpair failed and we were unable to recover it. 00:20:42.444 [2024-04-19 03:34:19.749525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.749673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.444 [2024-04-19 03:34:19.749699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 
00:20:42.445 [2024-04-19 03:34:19.749853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.750002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.750028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.750185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.750326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.750352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.750499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.750623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.750649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.750777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.750961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.750987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.751116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.751255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.751281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.751454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.751582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.751608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.751759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.751912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.751938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 
00:20:42.445 [2024-04-19 03:34:19.752089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.752213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.752240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.752378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.752511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.752537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.752669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.752827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.752854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.752986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.753141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.753167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.753298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.753460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.753487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.753612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.753758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.753785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.753913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.754065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.754093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 
00:20:42.445 [2024-04-19 03:34:19.754233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.754419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.754445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.754568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.754696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.754722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.754943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.755084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.755111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.755241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.755378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.755433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.755583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.755713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.755739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.755897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.756036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.756062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.756216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.756368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.756401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 
00:20:42.445 [2024-04-19 03:34:19.756526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.756679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.756705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.756839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.756996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.757022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.757159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.757307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.757333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.445 qpair failed and we were unable to recover it. 00:20:42.445 [2024-04-19 03:34:19.757463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.445 [2024-04-19 03:34:19.757596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.757622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.757775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.757927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.757953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.758136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.758271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.758297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.758446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.758567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.758594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 
00:20:42.446 [2024-04-19 03:34:19.758716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.758844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.758870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.759000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.759126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.759152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.759281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.759416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.759443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.759562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.759695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.759721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.759838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.759996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.760022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.760177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.760301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.760329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.760465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.760606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.760632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 
00:20:42.446 [2024-04-19 03:34:19.760758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.760909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.760935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.761063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.761240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.761267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.761418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.761547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.761573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.761719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.761869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.761899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.762057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.762215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.762241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.762363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.762511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.762538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.762701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.762822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.762848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 
00:20:42.446 [2024-04-19 03:34:19.762977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.763106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.763132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.763255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.763392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.763419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.763549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.763706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.763732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.763882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.764004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.764030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.764200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.764329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.764355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.764522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.764644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.764670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.764812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.764932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.764958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 
00:20:42.446 [2024-04-19 03:34:19.765106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.765264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.765290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.765454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.765589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.765615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.765757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.765875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.765901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.766022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.766166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.766192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.446 [2024-04-19 03:34:19.766340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.766529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.446 [2024-04-19 03:34:19.766556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.446 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.766679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.766835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.766862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.766995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.767126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.767154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 
00:20:42.447 [2024-04-19 03:34:19.767292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.767425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.767465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.767619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.767747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.767774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.767935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.768088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.768115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.768249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.768414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.768442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.768593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.768717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.768744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.768871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.769031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.769057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.769208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.769337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.769363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 
00:20:42.447 [2024-04-19 03:34:19.769504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.769623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.769649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.769775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.769902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.769927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.770078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.770223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.770249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.770374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.770520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.770546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.770670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.770851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.770877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.770997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.771131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.771157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.771314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.771475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.771503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 
00:20:42.447 [2024-04-19 03:34:19.771639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.771776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.771802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.771935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.772056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.772082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.772207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.772325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.772352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.772495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.772647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.772673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.772802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.772944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.772970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.773117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.773273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.773300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.773433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.773563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.773589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 
00:20:42.447 [2024-04-19 03:34:19.773718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.773842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.773869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.774005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.774139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.774167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.774290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.774477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.774505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.774626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.774747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.774773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.774932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.775056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.775082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.775213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.775350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.447 [2024-04-19 03:34:19.775376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.447 qpair failed and we were unable to recover it. 00:20:42.447 [2024-04-19 03:34:19.775518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.775649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.775675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.448 qpair failed and we were unable to recover it. 
00:20:42.448 [2024-04-19 03:34:19.775798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.775949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.775975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.448 qpair failed and we were unable to recover it. 00:20:42.448 [2024-04-19 03:34:19.776112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.776271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.776296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.448 qpair failed and we were unable to recover it. 00:20:42.448 [2024-04-19 03:34:19.776445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.776596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.776623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.448 qpair failed and we were unable to recover it. 00:20:42.448 [2024-04-19 03:34:19.776744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.776900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.776927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.448 qpair failed and we were unable to recover it. 00:20:42.448 [2024-04-19 03:34:19.777054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.777173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.777199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.448 qpair failed and we were unable to recover it. 00:20:42.448 [2024-04-19 03:34:19.777357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.777509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.777539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.448 qpair failed and we were unable to recover it. 00:20:42.448 [2024-04-19 03:34:19.777663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.777819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.448 [2024-04-19 03:34:19.777845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.448 qpair failed and we were unable to recover it. 
00:20:42.448 03:34:19 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:42.448 03:34:19 -- common/autotest_common.sh@850 -- # return 0
00:20:42.448 03:34:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:20:42.448 03:34:19 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:42.448 03:34:19 -- common/autotest_common.sh@10 -- # set +x
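[editor's note] the trace above appears to be the tail of a retry helper in autotest_common.sh ((( i == 0 )) guarding the final iteration, then return 0 on success), after which nvmf/common.sh closes the start_nvmf_tgt timing block and re-disables xtrace. The general shape of such a poll-until-up helper, as a sketch only (function name, pid variable, and timeout are assumptions, not the harness's actual code):

    wait_for_tgt() {    # hypothetical poll helper
        local i nvmfpid=$1
        for ((i = 30; i >= 0; i--)); do
            kill -0 "$nvmfpid" 2>/dev/null && return 0    # target process is up
            (( i == 0 )) && return 1                      # out of retries
            sleep 1
        done
    }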
00:20:42.451 03:34:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:42.451 03:34:19 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:42.451 03:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:42.451 03:34:19 -- common/autotest_common.sh@10 -- # set +x
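[editor's note] the trap line above registers cleanup (process_shm, then nvmftestfini) to run when the test exits or is interrupted, so the target is torn down even if target_disconnect.sh aborts mid-run. rpc_cmd then forwards its arguments to the running SPDK target's JSON-RPC server; assuming it behaves like the stock scripts/rpc.py client, the equivalent standalone call would be the sketch below, with the repository path being an assumption:

    # create a 64 MiB RAM-backed block device with 512-byte blocks,
    # named Malloc0, inside the running nvmf target
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # the test presumably exposes Malloc0 as a namespace of the
    # NVMe-oF subsystem it is about to disconnect from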
00:20:42.453 [2024-04-19 03:34:19.825048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.825172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.825199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 00:20:42.453 [2024-04-19 03:34:19.825326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.825518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.825544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 00:20:42.453 [2024-04-19 03:34:19.825672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.825799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.825825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 00:20:42.453 [2024-04-19 03:34:19.825998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.826129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.826155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 00:20:42.453 [2024-04-19 03:34:19.826319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.826468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.826494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 00:20:42.453 [2024-04-19 03:34:19.826624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.826771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.826797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 00:20:42.453 [2024-04-19 03:34:19.826960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.827093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.827120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 
00:20:42.453 [2024-04-19 03:34:19.827257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.827385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.827412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 00:20:42.453 [2024-04-19 03:34:19.827552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.827741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.827767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 00:20:42.453 [2024-04-19 03:34:19.827947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.828072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.828098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 00:20:42.453 [2024-04-19 03:34:19.828236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.828398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.828425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 00:20:42.453 [2024-04-19 03:34:19.828558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.828724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.828751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 00:20:42.453 [2024-04-19 03:34:19.828888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.829045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.829072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 00:20:42.453 [2024-04-19 03:34:19.829265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.829401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.453 [2024-04-19 03:34:19.829436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33f30 with addr=10.0.0.2, port=4420 00:20:42.453 qpair failed and we were unable to recover it. 
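Editor's note: errno = 111 on Linux is ECONNREFUSED — nothing was accepting on 10.0.0.2:4420 yet, which is expected at this point because the host side keeps retrying while the target is still being configured below. If python3 is available on the test node (an assumption, not shown in this log), an errno value from a log like this can be decoded with:

    $ python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    ECONNREFUSED - Connection refused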
00:20:42.454 Malloc0
00:20:42.454 03:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:42.454 03:34:19 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:20:42.454 03:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:42.454 03:34:19 -- common/autotest_common.sh@10 -- # set +x
00:20:42.454 [2024-04-19 03:34:19.834510] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:42.455 [... connect()/qpair-failure retry records, interleaved with the trace above, continue from 03:34:19.830 through 03:34:19.842 ...]
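Editor's note: rpc_cmd is the autotest harness wrapper around SPDK's scripts/rpc.py, so the transport-creation step above corresponds roughly to the following standalone call (a sketch: the RPC socket path is assumed to be the default, and the exact meaning of the short -o option can vary across SPDK versions):

    $ scripts/rpc.py nvmf_create_transport -t tcp -o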
00:20:42.455 03:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:42.455 03:34:19 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:42.455 03:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:42.455 03:34:19 -- common/autotest_common.sh@10 -- # set +x
00:20:42.456 [... connect()/qpair-failure retry records continue from 03:34:19.842 through 03:34:19.849 ...]
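Editor's note: the standalone equivalent of the subsystem-creation step, flags mirrored from the trace (-a allows any host NQN to connect, -s sets the subsystem serial number); a sketch, assuming the default RPC socket:

    $ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001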
00:20:42.456 03:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:42.456 03:34:19 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:42.456 03:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:42.456 03:34:19 -- common/autotest_common.sh@10 -- # set +x
00:20:42.457 [... connect()/qpair-failure retry records continue from 03:34:19.849 through 03:34:19.858 ...]
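Editor's note: this step attaches the Malloc0 bdev (whose creation produced the lone "Malloc0" output earlier) to the subsystem as a namespace. A sketch of the pair of calls involved; only the add_ns call appears in this trace, and the bdev_malloc_create size and block-size arguments below are illustrative assumptions:

    $ scripts/rpc.py bdev_malloc_create -b Malloc0 64 512   # assumed: 64 MiB ram-backed bdev, 512-byte blocks
    $ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0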
00:20:42.457 03:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:42.457 03:34:19 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:42.457 03:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:42.457 03:34:19 -- common/autotest_common.sh@10 -- # set +x
00:20:42.457 [... connect()/qpair-failure retry records continue from 03:34:19.858 through 03:34:19.862 ...]
00:20:42.457 [2024-04-19 03:34:19.862751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.457 [2024-04-19 03:34:19.862796] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:42.457 [2024-04-19 03:34:19.865755] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:20:42.457 [2024-04-19 03:34:19.865838] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f33f30 (107): Transport endpoint is not connected
00:20:42.457 [2024-04-19 03:34:19.865909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:42.457 qpair failed and we were unable to recover it.
00:20:42.457 03:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:42.457 03:34:19 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:20:42.457 03:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:42.457 03:34:19 -- common/autotest_common.sh@10 -- # set +x
00:20:42.457 03:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:42.457 03:34:19 -- host/target_disconnect.sh@58 -- # wait 329053
00:20:42.458 [2024-04-19 03:34:19.875180] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:42.458 [2024-04-19 03:34:19.875342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:42.458 [2024-04-19 03:34:19.875370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:42.458 [2024-04-19 03:34:19.875392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:42.458 [2024-04-19 03:34:19.875407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:42.458 [2024-04-19 03:34:19.875438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:42.458 qpair failed and we were unable to recover it.
00:20:42.458 [2024-04-19 03:34:19.885149] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:42.458 [2024-04-19 03:34:19.885282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:42.458 [2024-04-19 03:34:19.885307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:42.458 [2024-04-19 03:34:19.885322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:42.458 [2024-04-19 03:34:19.885335] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:42.458 [2024-04-19 03:34:19.885364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:42.458 qpair failed and we were unable to recover it.
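The "NVMe/TCP Target Listening" notice above lands when host/target_disconnect.sh re-adds the listeners over RPC, which is what the two rpc_cmd trace lines show. Run outside the harness, the equivalent calls would look like the sketch below; it assumes a running SPDK nvmf target and the in-tree scripts/rpc.py, while the subsystem, transport, address, and port arguments are exactly the ones in the trace:

  # Re-add the TCP listener for the test subsystem, then for the discovery
  # subsystem; the target then logs "Listening on 10.0.0.2 port 4420".
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420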
00:20:42.458 [2024-04-19 03:34:19.895073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.458 [2024-04-19 03:34:19.895207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.458 [2024-04-19 03:34:19.895232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.458 [2024-04-19 03:34:19.895246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.458 [2024-04-19 03:34:19.895259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.458 [2024-04-19 03:34:19.895287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.458 qpair failed and we were unable to recover it. 00:20:42.458 [2024-04-19 03:34:19.905118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.458 [2024-04-19 03:34:19.905282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.458 [2024-04-19 03:34:19.905307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.458 [2024-04-19 03:34:19.905321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.458 [2024-04-19 03:34:19.905334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.458 [2024-04-19 03:34:19.905362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.458 qpair failed and we were unable to recover it. 00:20:42.458 [2024-04-19 03:34:19.915119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.458 [2024-04-19 03:34:19.915248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.458 [2024-04-19 03:34:19.915273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.458 [2024-04-19 03:34:19.915293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.458 [2024-04-19 03:34:19.915306] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.458 [2024-04-19 03:34:19.915335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.458 qpair failed and we were unable to recover it. 
00:20:42.458 [2024-04-19 03:34:19.925140] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.458 [2024-04-19 03:34:19.925269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.458 [2024-04-19 03:34:19.925293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.458 [2024-04-19 03:34:19.925308] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.458 [2024-04-19 03:34:19.925321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.458 [2024-04-19 03:34:19.925350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.458 qpair failed and we were unable to recover it. 00:20:42.458 [2024-04-19 03:34:19.935150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.458 [2024-04-19 03:34:19.935283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.458 [2024-04-19 03:34:19.935308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.458 [2024-04-19 03:34:19.935324] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.458 [2024-04-19 03:34:19.935336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.458 [2024-04-19 03:34:19.935364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.458 qpair failed and we were unable to recover it. 00:20:42.458 [2024-04-19 03:34:19.945240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.458 [2024-04-19 03:34:19.945373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.458 [2024-04-19 03:34:19.945405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.458 [2024-04-19 03:34:19.945420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.458 [2024-04-19 03:34:19.945433] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.458 [2024-04-19 03:34:19.945462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.458 qpair failed and we were unable to recover it. 
00:20:42.458 [2024-04-19 03:34:19.955217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.458 [2024-04-19 03:34:19.955354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.458 [2024-04-19 03:34:19.955378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.458 [2024-04-19 03:34:19.955403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.458 [2024-04-19 03:34:19.955417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.458 [2024-04-19 03:34:19.955445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.458 qpair failed and we were unable to recover it. 00:20:42.458 [2024-04-19 03:34:19.965257] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.458 [2024-04-19 03:34:19.965422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.458 [2024-04-19 03:34:19.965449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.458 [2024-04-19 03:34:19.965464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.458 [2024-04-19 03:34:19.965477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.458 [2024-04-19 03:34:19.965505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.458 qpair failed and we were unable to recover it. 00:20:42.458 [2024-04-19 03:34:19.975281] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.458 [2024-04-19 03:34:19.975421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.458 [2024-04-19 03:34:19.975446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.458 [2024-04-19 03:34:19.975460] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.458 [2024-04-19 03:34:19.975473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.458 [2024-04-19 03:34:19.975502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.458 qpair failed and we were unable to recover it. 
00:20:42.717 [2024-04-19 03:34:19.985327] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.717 [2024-04-19 03:34:19.985461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.717 [2024-04-19 03:34:19.985486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.717 [2024-04-19 03:34:19.985501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.717 [2024-04-19 03:34:19.985514] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.717 [2024-04-19 03:34:19.985542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.717 qpair failed and we were unable to recover it. 00:20:42.717 [2024-04-19 03:34:19.995362] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.717 [2024-04-19 03:34:19.995521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.717 [2024-04-19 03:34:19.995546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.717 [2024-04-19 03:34:19.995560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.717 [2024-04-19 03:34:19.995573] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.717 [2024-04-19 03:34:19.995601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.717 qpair failed and we were unable to recover it. 00:20:42.717 [2024-04-19 03:34:20.005374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.717 [2024-04-19 03:34:20.005537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.717 [2024-04-19 03:34:20.005562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.717 [2024-04-19 03:34:20.005582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.717 [2024-04-19 03:34:20.005596] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.717 [2024-04-19 03:34:20.005625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.717 qpair failed and we were unable to recover it. 
00:20:42.717 [2024-04-19 03:34:20.015427] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.717 [2024-04-19 03:34:20.015604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.717 [2024-04-19 03:34:20.015631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.717 [2024-04-19 03:34:20.015645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.717 [2024-04-19 03:34:20.015659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.717 [2024-04-19 03:34:20.015689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.717 qpair failed and we were unable to recover it. 00:20:42.717 [2024-04-19 03:34:20.025523] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.717 [2024-04-19 03:34:20.025685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.717 [2024-04-19 03:34:20.025711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.718 [2024-04-19 03:34:20.025726] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.718 [2024-04-19 03:34:20.025738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.718 [2024-04-19 03:34:20.025769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-19 03:34:20.035508] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.718 [2024-04-19 03:34:20.035662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.718 [2024-04-19 03:34:20.035689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.718 [2024-04-19 03:34:20.035704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.718 [2024-04-19 03:34:20.035716] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.718 [2024-04-19 03:34:20.035746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.718 qpair failed and we were unable to recover it. 
00:20:42.718 [2024-04-19 03:34:20.045586] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.718 [2024-04-19 03:34:20.045736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.718 [2024-04-19 03:34:20.045762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.718 [2024-04-19 03:34:20.045777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.718 [2024-04-19 03:34:20.045789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.718 [2024-04-19 03:34:20.045818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-19 03:34:20.055535] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.718 [2024-04-19 03:34:20.055678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.718 [2024-04-19 03:34:20.055707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.718 [2024-04-19 03:34:20.055722] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.718 [2024-04-19 03:34:20.055736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.718 [2024-04-19 03:34:20.055766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-19 03:34:20.065579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.718 [2024-04-19 03:34:20.065721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.718 [2024-04-19 03:34:20.065747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.718 [2024-04-19 03:34:20.065761] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.718 [2024-04-19 03:34:20.065773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.718 [2024-04-19 03:34:20.065802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.718 qpair failed and we were unable to recover it. 
00:20:42.718 [2024-04-19 03:34:20.075725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.718 [2024-04-19 03:34:20.075875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.718 [2024-04-19 03:34:20.075901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.718 [2024-04-19 03:34:20.075916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.718 [2024-04-19 03:34:20.075929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.718 [2024-04-19 03:34:20.075958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-19 03:34:20.085613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.718 [2024-04-19 03:34:20.085754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.718 [2024-04-19 03:34:20.085780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.718 [2024-04-19 03:34:20.085795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.718 [2024-04-19 03:34:20.085808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.718 [2024-04-19 03:34:20.085837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-19 03:34:20.095664] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.718 [2024-04-19 03:34:20.095828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.718 [2024-04-19 03:34:20.095872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.718 [2024-04-19 03:34:20.095887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.718 [2024-04-19 03:34:20.095900] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.718 [2024-04-19 03:34:20.095928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.718 qpair failed and we were unable to recover it. 
00:20:42.718 [2024-04-19 03:34:20.105749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.718 [2024-04-19 03:34:20.105878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.718 [2024-04-19 03:34:20.105904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.718 [2024-04-19 03:34:20.105919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.718 [2024-04-19 03:34:20.105931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.718 [2024-04-19 03:34:20.105959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-19 03:34:20.115705] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.718 [2024-04-19 03:34:20.115841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.718 [2024-04-19 03:34:20.115867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.718 [2024-04-19 03:34:20.115882] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.718 [2024-04-19 03:34:20.115895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.718 [2024-04-19 03:34:20.115924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-19 03:34:20.125772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.718 [2024-04-19 03:34:20.125913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.718 [2024-04-19 03:34:20.125940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.718 [2024-04-19 03:34:20.125955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.718 [2024-04-19 03:34:20.125968] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.718 [2024-04-19 03:34:20.125997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.718 qpair failed and we were unable to recover it. 
00:20:42.718 [2024-04-19 03:34:20.135719] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.719 [2024-04-19 03:34:20.135853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.719 [2024-04-19 03:34:20.135878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.719 [2024-04-19 03:34:20.135892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.719 [2024-04-19 03:34:20.135905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.719 [2024-04-19 03:34:20.135940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-19 03:34:20.145796] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.719 [2024-04-19 03:34:20.145937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.719 [2024-04-19 03:34:20.145964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.719 [2024-04-19 03:34:20.145979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.719 [2024-04-19 03:34:20.145992] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.719 [2024-04-19 03:34:20.146020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-19 03:34:20.155859] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.719 [2024-04-19 03:34:20.156028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.719 [2024-04-19 03:34:20.156054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.719 [2024-04-19 03:34:20.156069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.719 [2024-04-19 03:34:20.156082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.719 [2024-04-19 03:34:20.156110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.719 qpair failed and we were unable to recover it. 
00:20:42.719 [2024-04-19 03:34:20.165845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.719 [2024-04-19 03:34:20.165995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.719 [2024-04-19 03:34:20.166025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.719 [2024-04-19 03:34:20.166041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.719 [2024-04-19 03:34:20.166054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.719 [2024-04-19 03:34:20.166083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-19 03:34:20.175922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.719 [2024-04-19 03:34:20.176073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.719 [2024-04-19 03:34:20.176100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.719 [2024-04-19 03:34:20.176115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.719 [2024-04-19 03:34:20.176143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.719 [2024-04-19 03:34:20.176171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-19 03:34:20.185882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.719 [2024-04-19 03:34:20.186028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.719 [2024-04-19 03:34:20.186059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.719 [2024-04-19 03:34:20.186075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.719 [2024-04-19 03:34:20.186088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.719 [2024-04-19 03:34:20.186116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.719 qpair failed and we were unable to recover it. 
00:20:42.719 [2024-04-19 03:34:20.195907] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.719 [2024-04-19 03:34:20.196031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.719 [2024-04-19 03:34:20.196057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.719 [2024-04-19 03:34:20.196072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.719 [2024-04-19 03:34:20.196085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.719 [2024-04-19 03:34:20.196113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-19 03:34:20.205950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.719 [2024-04-19 03:34:20.206091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.719 [2024-04-19 03:34:20.206116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.719 [2024-04-19 03:34:20.206131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.719 [2024-04-19 03:34:20.206144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.719 [2024-04-19 03:34:20.206172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-19 03:34:20.216033] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.719 [2024-04-19 03:34:20.216193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.719 [2024-04-19 03:34:20.216217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.719 [2024-04-19 03:34:20.216232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.719 [2024-04-19 03:34:20.216245] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.719 [2024-04-19 03:34:20.216273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.719 qpair failed and we were unable to recover it. 
00:20:42.719 [2024-04-19 03:34:20.226011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.719 [2024-04-19 03:34:20.226147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.719 [2024-04-19 03:34:20.226171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.719 [2024-04-19 03:34:20.226186] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.719 [2024-04-19 03:34:20.226199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.719 [2024-04-19 03:34:20.226232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-19 03:34:20.235992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.719 [2024-04-19 03:34:20.236124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.719 [2024-04-19 03:34:20.236149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.719 [2024-04-19 03:34:20.236164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.719 [2024-04-19 03:34:20.236178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.719 [2024-04-19 03:34:20.236207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-19 03:34:20.246059] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.720 [2024-04-19 03:34:20.246192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.720 [2024-04-19 03:34:20.246217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.720 [2024-04-19 03:34:20.246231] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.720 [2024-04-19 03:34:20.246244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.720 [2024-04-19 03:34:20.246273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.720 qpair failed and we were unable to recover it. 
00:20:42.720 [2024-04-19 03:34:20.256078] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.720 [2024-04-19 03:34:20.256253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.720 [2024-04-19 03:34:20.256277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.720 [2024-04-19 03:34:20.256291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.720 [2024-04-19 03:34:20.256304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.720 [2024-04-19 03:34:20.256333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.720 qpair failed and we were unable to recover it. 00:20:42.720 [2024-04-19 03:34:20.266126] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.720 [2024-04-19 03:34:20.266270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.720 [2024-04-19 03:34:20.266294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.720 [2024-04-19 03:34:20.266309] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.720 [2024-04-19 03:34:20.266322] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.720 [2024-04-19 03:34:20.266349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.720 qpair failed and we were unable to recover it. 00:20:42.978 [2024-04-19 03:34:20.276121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.978 [2024-04-19 03:34:20.276258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.978 [2024-04-19 03:34:20.276289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.276305] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.276318] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.276346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.979 qpair failed and we were unable to recover it. 
00:20:42.979 [2024-04-19 03:34:20.286142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.979 [2024-04-19 03:34:20.286283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.979 [2024-04-19 03:34:20.286307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.286322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.286335] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.286363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.979 qpair failed and we were unable to recover it. 00:20:42.979 [2024-04-19 03:34:20.296182] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.979 [2024-04-19 03:34:20.296318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.979 [2024-04-19 03:34:20.296343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.296359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.296371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.296407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.979 qpair failed and we were unable to recover it. 00:20:42.979 [2024-04-19 03:34:20.306214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.979 [2024-04-19 03:34:20.306350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.979 [2024-04-19 03:34:20.306375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.306398] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.306412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.306440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.979 qpair failed and we were unable to recover it. 
00:20:42.979 [2024-04-19 03:34:20.316233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.979 [2024-04-19 03:34:20.316366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.979 [2024-04-19 03:34:20.316400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.316416] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.316429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.316463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.979 qpair failed and we were unable to recover it. 00:20:42.979 [2024-04-19 03:34:20.326241] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.979 [2024-04-19 03:34:20.326366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.979 [2024-04-19 03:34:20.326400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.326416] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.326429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.326458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.979 qpair failed and we were unable to recover it. 00:20:42.979 [2024-04-19 03:34:20.336358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.979 [2024-04-19 03:34:20.336517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.979 [2024-04-19 03:34:20.336542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.336557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.336570] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.336598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.979 qpair failed and we were unable to recover it. 
00:20:42.979 [2024-04-19 03:34:20.346302] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.979 [2024-04-19 03:34:20.346435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.979 [2024-04-19 03:34:20.346460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.346474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.346487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.346515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.979 qpair failed and we were unable to recover it. 00:20:42.979 [2024-04-19 03:34:20.356343] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.979 [2024-04-19 03:34:20.356482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.979 [2024-04-19 03:34:20.356507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.356522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.356535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.356563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.979 qpair failed and we were unable to recover it. 00:20:42.979 [2024-04-19 03:34:20.366390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.979 [2024-04-19 03:34:20.366518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.979 [2024-04-19 03:34:20.366549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.366564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.366577] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.366605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.979 qpair failed and we were unable to recover it. 
00:20:42.979 [2024-04-19 03:34:20.376406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.979 [2024-04-19 03:34:20.376540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.979 [2024-04-19 03:34:20.376565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.376580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.376592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.376621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.979 qpair failed and we were unable to recover it. 00:20:42.979 [2024-04-19 03:34:20.386430] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.979 [2024-04-19 03:34:20.386571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.979 [2024-04-19 03:34:20.386596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.386610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.386622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.386651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.979 qpair failed and we were unable to recover it. 00:20:42.979 [2024-04-19 03:34:20.396499] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.979 [2024-04-19 03:34:20.396632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.979 [2024-04-19 03:34:20.396657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.396672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.396684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.396712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.979 qpair failed and we were unable to recover it. 
00:20:42.979 [2024-04-19 03:34:20.406480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.979 [2024-04-19 03:34:20.406610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.979 [2024-04-19 03:34:20.406635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.979 [2024-04-19 03:34:20.406650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.979 [2024-04-19 03:34:20.406669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.979 [2024-04-19 03:34:20.406699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.980 qpair failed and we were unable to recover it. 00:20:42.980 [2024-04-19 03:34:20.416575] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.980 [2024-04-19 03:34:20.416714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.980 [2024-04-19 03:34:20.416739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.980 [2024-04-19 03:34:20.416754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.980 [2024-04-19 03:34:20.416767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.980 [2024-04-19 03:34:20.416795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.980 qpair failed and we were unable to recover it. 00:20:42.980 [2024-04-19 03:34:20.426558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.980 [2024-04-19 03:34:20.426694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.980 [2024-04-19 03:34:20.426720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.980 [2024-04-19 03:34:20.426735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.980 [2024-04-19 03:34:20.426749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.980 [2024-04-19 03:34:20.426778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.980 qpair failed and we were unable to recover it. 
00:20:42.980 [2024-04-19 03:34:20.436574] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.980 [2024-04-19 03:34:20.436709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.980 [2024-04-19 03:34:20.436736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.980 [2024-04-19 03:34:20.436751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.980 [2024-04-19 03:34:20.436764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.980 [2024-04-19 03:34:20.436792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.980 qpair failed and we were unable to recover it. 00:20:42.980 [2024-04-19 03:34:20.446594] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.980 [2024-04-19 03:34:20.446734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.980 [2024-04-19 03:34:20.446761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.980 [2024-04-19 03:34:20.446782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.980 [2024-04-19 03:34:20.446795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.980 [2024-04-19 03:34:20.446824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.980 qpair failed and we were unable to recover it. 00:20:42.980 [2024-04-19 03:34:20.456673] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.980 [2024-04-19 03:34:20.456813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.980 [2024-04-19 03:34:20.456840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.980 [2024-04-19 03:34:20.456855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.980 [2024-04-19 03:34:20.456868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.980 [2024-04-19 03:34:20.456911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.980 qpair failed and we were unable to recover it. 
00:20:42.980 [2024-04-19 03:34:20.466668] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.980 [2024-04-19 03:34:20.466805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.980 [2024-04-19 03:34:20.466830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.980 [2024-04-19 03:34:20.466844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.980 [2024-04-19 03:34:20.466856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.980 [2024-04-19 03:34:20.466884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.980 qpair failed and we were unable to recover it. 00:20:42.980 [2024-04-19 03:34:20.476708] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.980 [2024-04-19 03:34:20.476859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.980 [2024-04-19 03:34:20.476887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.980 [2024-04-19 03:34:20.476902] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.980 [2024-04-19 03:34:20.476919] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.980 [2024-04-19 03:34:20.476949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.980 qpair failed and we were unable to recover it. 00:20:42.980 [2024-04-19 03:34:20.486721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.980 [2024-04-19 03:34:20.486847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.980 [2024-04-19 03:34:20.486873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.980 [2024-04-19 03:34:20.486889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.980 [2024-04-19 03:34:20.486901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.980 [2024-04-19 03:34:20.486929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.980 qpair failed and we were unable to recover it. 
00:20:42.980 [2024-04-19 03:34:20.496798] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.980 [2024-04-19 03:34:20.496937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.980 [2024-04-19 03:34:20.496963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.980 [2024-04-19 03:34:20.496978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.980 [2024-04-19 03:34:20.496995] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.980 [2024-04-19 03:34:20.497025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.980 qpair failed and we were unable to recover it. 00:20:42.980 [2024-04-19 03:34:20.506788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.980 [2024-04-19 03:34:20.506926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.980 [2024-04-19 03:34:20.506952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.980 [2024-04-19 03:34:20.506967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.980 [2024-04-19 03:34:20.506979] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.980 [2024-04-19 03:34:20.507007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.980 qpair failed and we were unable to recover it. 00:20:42.980 [2024-04-19 03:34:20.516794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.980 [2024-04-19 03:34:20.516954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.980 [2024-04-19 03:34:20.516979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.980 [2024-04-19 03:34:20.516995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.980 [2024-04-19 03:34:20.517007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.980 [2024-04-19 03:34:20.517035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.980 qpair failed and we were unable to recover it. 
00:20:42.980 [2024-04-19 03:34:20.526798] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.980 [2024-04-19 03:34:20.526925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.980 [2024-04-19 03:34:20.526951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.980 [2024-04-19 03:34:20.526966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.980 [2024-04-19 03:34:20.526978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:42.980 [2024-04-19 03:34:20.527006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:42.980 qpair failed and we were unable to recover it. 00:20:43.239 [2024-04-19 03:34:20.536855] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.239 [2024-04-19 03:34:20.537000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.239 [2024-04-19 03:34:20.537026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.239 [2024-04-19 03:34:20.537041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.239 [2024-04-19 03:34:20.537054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.239 [2024-04-19 03:34:20.537081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.239 qpair failed and we were unable to recover it. 00:20:43.239 [2024-04-19 03:34:20.546888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.239 [2024-04-19 03:34:20.547034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.239 [2024-04-19 03:34:20.547062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.239 [2024-04-19 03:34:20.547077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.239 [2024-04-19 03:34:20.547089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.239 [2024-04-19 03:34:20.547118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.239 qpair failed and we were unable to recover it. 
00:20:43.239 [2024-04-19 03:34:20.556917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.239 [2024-04-19 03:34:20.557052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.239 [2024-04-19 03:34:20.557078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.239 [2024-04-19 03:34:20.557093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.239 [2024-04-19 03:34:20.557106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.557134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 00:20:43.240 [2024-04-19 03:34:20.566909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.240 [2024-04-19 03:34:20.567035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.240 [2024-04-19 03:34:20.567061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.240 [2024-04-19 03:34:20.567077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.240 [2024-04-19 03:34:20.567089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.567117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 00:20:43.240 [2024-04-19 03:34:20.576964] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.240 [2024-04-19 03:34:20.577101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.240 [2024-04-19 03:34:20.577128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.240 [2024-04-19 03:34:20.577143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.240 [2024-04-19 03:34:20.577156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.577185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 
00:20:43.240 [2024-04-19 03:34:20.587020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.240 [2024-04-19 03:34:20.587189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.240 [2024-04-19 03:34:20.587215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.240 [2024-04-19 03:34:20.587236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.240 [2024-04-19 03:34:20.587249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.587277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 00:20:43.240 [2024-04-19 03:34:20.597096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.240 [2024-04-19 03:34:20.597236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.240 [2024-04-19 03:34:20.597275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.240 [2024-04-19 03:34:20.597291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.240 [2024-04-19 03:34:20.597303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.597341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 00:20:43.240 [2024-04-19 03:34:20.607040] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.240 [2024-04-19 03:34:20.607171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.240 [2024-04-19 03:34:20.607197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.240 [2024-04-19 03:34:20.607223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.240 [2024-04-19 03:34:20.607235] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.607265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 
00:20:43.240 [2024-04-19 03:34:20.617094] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.240 [2024-04-19 03:34:20.617249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.240 [2024-04-19 03:34:20.617275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.240 [2024-04-19 03:34:20.617290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.240 [2024-04-19 03:34:20.617303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.617331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 00:20:43.240 [2024-04-19 03:34:20.627114] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.240 [2024-04-19 03:34:20.627277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.240 [2024-04-19 03:34:20.627305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.240 [2024-04-19 03:34:20.627321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.240 [2024-04-19 03:34:20.627337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.627389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 00:20:43.240 [2024-04-19 03:34:20.637139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.240 [2024-04-19 03:34:20.637319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.240 [2024-04-19 03:34:20.637349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.240 [2024-04-19 03:34:20.637364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.240 [2024-04-19 03:34:20.637377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.637423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 
00:20:43.240 [2024-04-19 03:34:20.647191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.240 [2024-04-19 03:34:20.647348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.240 [2024-04-19 03:34:20.647391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.240 [2024-04-19 03:34:20.647409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.240 [2024-04-19 03:34:20.647421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.647450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 00:20:43.240 [2024-04-19 03:34:20.657183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.240 [2024-04-19 03:34:20.657328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.240 [2024-04-19 03:34:20.657354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.240 [2024-04-19 03:34:20.657374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.240 [2024-04-19 03:34:20.657395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.657425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 00:20:43.240 [2024-04-19 03:34:20.667422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.240 [2024-04-19 03:34:20.667591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.240 [2024-04-19 03:34:20.667617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.240 [2024-04-19 03:34:20.667632] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.240 [2024-04-19 03:34:20.667644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.667671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 
00:20:43.240 [2024-04-19 03:34:20.677366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.240 [2024-04-19 03:34:20.677515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.240 [2024-04-19 03:34:20.677542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.240 [2024-04-19 03:34:20.677562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.240 [2024-04-19 03:34:20.677576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.677603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 00:20:43.240 [2024-04-19 03:34:20.687305] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.240 [2024-04-19 03:34:20.687442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.240 [2024-04-19 03:34:20.687469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.240 [2024-04-19 03:34:20.687484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.240 [2024-04-19 03:34:20.687496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.240 [2024-04-19 03:34:20.687525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.240 qpair failed and we were unable to recover it. 00:20:43.240 [2024-04-19 03:34:20.697344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.241 [2024-04-19 03:34:20.697501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.241 [2024-04-19 03:34:20.697528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.241 [2024-04-19 03:34:20.697542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.241 [2024-04-19 03:34:20.697554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.241 [2024-04-19 03:34:20.697584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.241 qpair failed and we were unable to recover it. 
00:20:43.241 [2024-04-19 03:34:20.707346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.241 [2024-04-19 03:34:20.707518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.241 [2024-04-19 03:34:20.707547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.241 [2024-04-19 03:34:20.707562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.241 [2024-04-19 03:34:20.707579] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.241 [2024-04-19 03:34:20.707608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.241 qpair failed and we were unable to recover it. 00:20:43.241 [2024-04-19 03:34:20.717350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.241 [2024-04-19 03:34:20.717500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.241 [2024-04-19 03:34:20.717526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.241 [2024-04-19 03:34:20.717542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.241 [2024-04-19 03:34:20.717554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.241 [2024-04-19 03:34:20.717583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.241 qpair failed and we were unable to recover it. 00:20:43.241 [2024-04-19 03:34:20.727416] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.241 [2024-04-19 03:34:20.727549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.241 [2024-04-19 03:34:20.727574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.241 [2024-04-19 03:34:20.727589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.241 [2024-04-19 03:34:20.727602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.241 [2024-04-19 03:34:20.727630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.241 qpair failed and we were unable to recover it. 
00:20:43.241 [2024-04-19 03:34:20.737447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.241 [2024-04-19 03:34:20.737596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.241 [2024-04-19 03:34:20.737623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.241 [2024-04-19 03:34:20.737638] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.241 [2024-04-19 03:34:20.737650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.241 [2024-04-19 03:34:20.737693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.241 qpair failed and we were unable to recover it. 00:20:43.241 [2024-04-19 03:34:20.747459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.241 [2024-04-19 03:34:20.747601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.241 [2024-04-19 03:34:20.747627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.241 [2024-04-19 03:34:20.747642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.241 [2024-04-19 03:34:20.747654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.241 [2024-04-19 03:34:20.747689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.241 qpair failed and we were unable to recover it. 00:20:43.241 [2024-04-19 03:34:20.757460] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.241 [2024-04-19 03:34:20.757601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.241 [2024-04-19 03:34:20.757627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.241 [2024-04-19 03:34:20.757642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.241 [2024-04-19 03:34:20.757655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.241 [2024-04-19 03:34:20.757683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.241 qpair failed and we were unable to recover it. 
00:20:43.241 [2024-04-19 03:34:20.767497] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.241 [2024-04-19 03:34:20.767646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.241 [2024-04-19 03:34:20.767672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.241 [2024-04-19 03:34:20.767692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.241 [2024-04-19 03:34:20.767705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.241 [2024-04-19 03:34:20.767733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.241 qpair failed and we were unable to recover it. 00:20:43.241 [2024-04-19 03:34:20.777523] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.241 [2024-04-19 03:34:20.777661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.241 [2024-04-19 03:34:20.777687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.241 [2024-04-19 03:34:20.777702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.241 [2024-04-19 03:34:20.777715] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.241 [2024-04-19 03:34:20.777743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.241 qpair failed and we were unable to recover it. 00:20:43.241 [2024-04-19 03:34:20.787557] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.241 [2024-04-19 03:34:20.787723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.241 [2024-04-19 03:34:20.787748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.241 [2024-04-19 03:34:20.787763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.241 [2024-04-19 03:34:20.787775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.241 [2024-04-19 03:34:20.787804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.241 qpair failed and we were unable to recover it. 
00:20:43.500 [2024-04-19 03:34:20.797598] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.500 [2024-04-19 03:34:20.797737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.500 [2024-04-19 03:34:20.797763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.500 [2024-04-19 03:34:20.797778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.500 [2024-04-19 03:34:20.797790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.500 [2024-04-19 03:34:20.797819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.500 qpair failed and we were unable to recover it. 00:20:43.500 [2024-04-19 03:34:20.807596] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.500 [2024-04-19 03:34:20.807761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.500 [2024-04-19 03:34:20.807787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.500 [2024-04-19 03:34:20.807803] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.500 [2024-04-19 03:34:20.807816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.500 [2024-04-19 03:34:20.807844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.500 qpair failed and we were unable to recover it. 00:20:43.500 [2024-04-19 03:34:20.817769] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.500 [2024-04-19 03:34:20.817913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.500 [2024-04-19 03:34:20.817938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.500 [2024-04-19 03:34:20.817953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.500 [2024-04-19 03:34:20.817965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.500 [2024-04-19 03:34:20.817993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.500 qpair failed and we were unable to recover it. 
00:20:43.500 [2024-04-19 03:34:20.827674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.500 [2024-04-19 03:34:20.827816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.500 [2024-04-19 03:34:20.827842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.500 [2024-04-19 03:34:20.827856] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.501 [2024-04-19 03:34:20.827868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.501 [2024-04-19 03:34:20.827897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.501 qpair failed and we were unable to recover it. 00:20:43.501 [2024-04-19 03:34:20.837690] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.501 [2024-04-19 03:34:20.837826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.501 [2024-04-19 03:34:20.837852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.501 [2024-04-19 03:34:20.837867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.501 [2024-04-19 03:34:20.837879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.501 [2024-04-19 03:34:20.837907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.501 qpair failed and we were unable to recover it. 00:20:43.501 [2024-04-19 03:34:20.847698] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.501 [2024-04-19 03:34:20.847891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.501 [2024-04-19 03:34:20.847918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.501 [2024-04-19 03:34:20.847934] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.501 [2024-04-19 03:34:20.847946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.501 [2024-04-19 03:34:20.847975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.501 qpair failed and we were unable to recover it. 
00:20:43.501 [2024-04-19 03:34:20.857842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.501 [2024-04-19 03:34:20.857987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.501 [2024-04-19 03:34:20.858017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.501 [2024-04-19 03:34:20.858032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.501 [2024-04-19 03:34:20.858045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.501 [2024-04-19 03:34:20.858073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.501 qpair failed and we were unable to recover it. 00:20:43.501 [2024-04-19 03:34:20.867755] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.501 [2024-04-19 03:34:20.867895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.501 [2024-04-19 03:34:20.867922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.501 [2024-04-19 03:34:20.867937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.501 [2024-04-19 03:34:20.867949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.501 [2024-04-19 03:34:20.867977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.501 qpair failed and we were unable to recover it. 00:20:43.501 [2024-04-19 03:34:20.877792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.501 [2024-04-19 03:34:20.877938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.501 [2024-04-19 03:34:20.877964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.501 [2024-04-19 03:34:20.877979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.501 [2024-04-19 03:34:20.877991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.501 [2024-04-19 03:34:20.878019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.501 qpair failed and we were unable to recover it. 
00:20:43.501 [2024-04-19 03:34:20.887793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.501 [2024-04-19 03:34:20.887916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.501 [2024-04-19 03:34:20.887942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.501 [2024-04-19 03:34:20.887956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.501 [2024-04-19 03:34:20.887969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.501 [2024-04-19 03:34:20.887997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.501 qpair failed and we were unable to recover it. 00:20:43.501 [2024-04-19 03:34:20.897883] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.501 [2024-04-19 03:34:20.898063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.501 [2024-04-19 03:34:20.898089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.501 [2024-04-19 03:34:20.898104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.501 [2024-04-19 03:34:20.898117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.501 [2024-04-19 03:34:20.898145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.501 qpair failed and we were unable to recover it. 00:20:43.501 [2024-04-19 03:34:20.907873] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.501 [2024-04-19 03:34:20.908010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.501 [2024-04-19 03:34:20.908036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.501 [2024-04-19 03:34:20.908051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.501 [2024-04-19 03:34:20.908064] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.501 [2024-04-19 03:34:20.908092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.501 qpair failed and we were unable to recover it. 
00:20:43.501 [2024-04-19 03:34:20.917937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.501 [2024-04-19 03:34:20.918075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.501 [2024-04-19 03:34:20.918100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.501 [2024-04-19 03:34:20.918115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.501 [2024-04-19 03:34:20.918127] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.501 [2024-04-19 03:34:20.918155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.501 qpair failed and we were unable to recover it. 00:20:43.501 [2024-04-19 03:34:20.927957] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.501 [2024-04-19 03:34:20.928101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.501 [2024-04-19 03:34:20.928127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.501 [2024-04-19 03:34:20.928142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.501 [2024-04-19 03:34:20.928154] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.501 [2024-04-19 03:34:20.928182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.501 qpair failed and we were unable to recover it. 00:20:43.501 [2024-04-19 03:34:20.937969] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.501 [2024-04-19 03:34:20.938122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.501 [2024-04-19 03:34:20.938148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.501 [2024-04-19 03:34:20.938163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.501 [2024-04-19 03:34:20.938176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.501 [2024-04-19 03:34:20.938204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.501 qpair failed and we were unable to recover it. 
00:20:43.501 [2024-04-19 03:34:20.947985] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.501 [2024-04-19 03:34:20.948135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.501 [2024-04-19 03:34:20.948165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.502 [2024-04-19 03:34:20.948181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.502 [2024-04-19 03:34:20.948194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.502 [2024-04-19 03:34:20.948222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.502 qpair failed and we were unable to recover it. 00:20:43.502 [2024-04-19 03:34:20.958013] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.502 [2024-04-19 03:34:20.958193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.502 [2024-04-19 03:34:20.958219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.502 [2024-04-19 03:34:20.958234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.502 [2024-04-19 03:34:20.958246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.502 [2024-04-19 03:34:20.958274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.502 qpair failed and we were unable to recover it. 00:20:43.502 [2024-04-19 03:34:20.968078] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.502 [2024-04-19 03:34:20.968215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.502 [2024-04-19 03:34:20.968241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.502 [2024-04-19 03:34:20.968256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.502 [2024-04-19 03:34:20.968269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:43.502 [2024-04-19 03:34:20.968296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:43.502 qpair failed and we were unable to recover it. 
00:20:43.502 [2024-04-19 03:34:20.978077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.502 [2024-04-19 03:34:20.978213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.502 [2024-04-19 03:34:20.978238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.502 [2024-04-19 03:34:20.978253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.502 [2024-04-19 03:34:20.978265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.502 [2024-04-19 03:34:20.978293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.502 qpair failed and we were unable to recover it.
00:20:43.502 [2024-04-19 03:34:20.988122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.502 [2024-04-19 03:34:20.988259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.502 [2024-04-19 03:34:20.988285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.502 [2024-04-19 03:34:20.988300] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.502 [2024-04-19 03:34:20.988312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.502 [2024-04-19 03:34:20.988345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.502 qpair failed and we were unable to recover it.
00:20:43.502 [2024-04-19 03:34:20.998118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.502 [2024-04-19 03:34:20.998292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.502 [2024-04-19 03:34:20.998318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.502 [2024-04-19 03:34:20.998333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.502 [2024-04-19 03:34:20.998345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.502 [2024-04-19 03:34:20.998373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.502 qpair failed and we were unable to recover it.
00:20:43.502 [2024-04-19 03:34:21.008165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.502 [2024-04-19 03:34:21.008298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.502 [2024-04-19 03:34:21.008324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.502 [2024-04-19 03:34:21.008339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.502 [2024-04-19 03:34:21.008351] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.502 [2024-04-19 03:34:21.008379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.502 qpair failed and we were unable to recover it.
00:20:43.502 [2024-04-19 03:34:21.018211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.502 [2024-04-19 03:34:21.018351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.502 [2024-04-19 03:34:21.018376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.502 [2024-04-19 03:34:21.018399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.502 [2024-04-19 03:34:21.018412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.502 [2024-04-19 03:34:21.018440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.502 qpair failed and we were unable to recover it.
00:20:43.502 [2024-04-19 03:34:21.028235] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.502 [2024-04-19 03:34:21.028396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.502 [2024-04-19 03:34:21.028424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.502 [2024-04-19 03:34:21.028440] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.502 [2024-04-19 03:34:21.028456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.502 [2024-04-19 03:34:21.028486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.502 qpair failed and we were unable to recover it.
00:20:43.502 [2024-04-19 03:34:21.038229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.502 [2024-04-19 03:34:21.038374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.502 [2024-04-19 03:34:21.038411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.502 [2024-04-19 03:34:21.038427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.502 [2024-04-19 03:34:21.038439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.502 [2024-04-19 03:34:21.038468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.502 qpair failed and we were unable to recover it.
00:20:43.502 [2024-04-19 03:34:21.048259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.502 [2024-04-19 03:34:21.048432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.502 [2024-04-19 03:34:21.048460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.502 [2024-04-19 03:34:21.048476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.502 [2024-04-19 03:34:21.048488] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.502 [2024-04-19 03:34:21.048517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.502 qpair failed and we were unable to recover it.
00:20:43.761 [2024-04-19 03:34:21.058336] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.761 [2024-04-19 03:34:21.058506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.761 [2024-04-19 03:34:21.058533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.761 [2024-04-19 03:34:21.058548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.761 [2024-04-19 03:34:21.058560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.761 [2024-04-19 03:34:21.058589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.761 qpair failed and we were unable to recover it.
00:20:43.761 [2024-04-19 03:34:21.068325] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.761 [2024-04-19 03:34:21.068473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.761 [2024-04-19 03:34:21.068499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.761 [2024-04-19 03:34:21.068514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.761 [2024-04-19 03:34:21.068526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.761 [2024-04-19 03:34:21.068555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.761 qpair failed and we were unable to recover it.
00:20:43.761 [2024-04-19 03:34:21.078354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.761 [2024-04-19 03:34:21.078501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.761 [2024-04-19 03:34:21.078528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.761 [2024-04-19 03:34:21.078543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.761 [2024-04-19 03:34:21.078556] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.761 [2024-04-19 03:34:21.078589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.761 qpair failed and we were unable to recover it.
00:20:43.761 [2024-04-19 03:34:21.088394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.761 [2024-04-19 03:34:21.088528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.761 [2024-04-19 03:34:21.088554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.761 [2024-04-19 03:34:21.088569] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.761 [2024-04-19 03:34:21.088582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.761 [2024-04-19 03:34:21.088610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.761 qpair failed and we were unable to recover it.
00:20:43.761 [2024-04-19 03:34:21.098471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.761 [2024-04-19 03:34:21.098643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.761 [2024-04-19 03:34:21.098668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.761 [2024-04-19 03:34:21.098683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.761 [2024-04-19 03:34:21.098696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.761 [2024-04-19 03:34:21.098723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.761 qpair failed and we were unable to recover it.
00:20:43.761 [2024-04-19 03:34:21.108441] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.761 [2024-04-19 03:34:21.108572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.761 [2024-04-19 03:34:21.108599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.761 [2024-04-19 03:34:21.108613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.761 [2024-04-19 03:34:21.108626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.761 [2024-04-19 03:34:21.108654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.761 qpair failed and we were unable to recover it.
00:20:43.761 [2024-04-19 03:34:21.118450] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.761 [2024-04-19 03:34:21.118600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.761 [2024-04-19 03:34:21.118626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.761 [2024-04-19 03:34:21.118641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.761 [2024-04-19 03:34:21.118653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.761 [2024-04-19 03:34:21.118682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.761 qpair failed and we were unable to recover it.
00:20:43.761 [2024-04-19 03:34:21.128473] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.761 [2024-04-19 03:34:21.128610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.761 [2024-04-19 03:34:21.128641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.761 [2024-04-19 03:34:21.128656] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.761 [2024-04-19 03:34:21.128669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.761 [2024-04-19 03:34:21.128698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.761 qpair failed and we were unable to recover it.
00:20:43.761 [2024-04-19 03:34:21.138524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.761 [2024-04-19 03:34:21.138661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.138687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.138702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.138714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.138742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.148565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.762 [2024-04-19 03:34:21.148739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.148766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.148781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.148793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.148821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.158559] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.762 [2024-04-19 03:34:21.158711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.158736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.158751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.158764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.158792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.168717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.762 [2024-04-19 03:34:21.168871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.168900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.168917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.168935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.168966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.178646] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.762 [2024-04-19 03:34:21.178785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.178812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.178827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.178839] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.178868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.188688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.762 [2024-04-19 03:34:21.188835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.188859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.188874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.188887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.188914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.198671] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.762 [2024-04-19 03:34:21.198797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.198823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.198838] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.198851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.198880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.208787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.762 [2024-04-19 03:34:21.208916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.208943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.208957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.208970] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.208999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.218743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.762 [2024-04-19 03:34:21.218881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.218908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.218922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.218935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.218978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.228849] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.762 [2024-04-19 03:34:21.228983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.229009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.229024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.229036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.229066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.238829] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.762 [2024-04-19 03:34:21.238965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.238992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.239008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.239021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.239049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.248940] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.762 [2024-04-19 03:34:21.249075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.249104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.249121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.249133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.249163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.258843] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.762 [2024-04-19 03:34:21.258975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.259003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.259018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.259037] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.259067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.268902] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.762 [2024-04-19 03:34:21.269038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.762 [2024-04-19 03:34:21.269065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.762 [2024-04-19 03:34:21.269080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.762 [2024-04-19 03:34:21.269093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.762 [2024-04-19 03:34:21.269136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.762 qpair failed and we were unable to recover it.
00:20:43.762 [2024-04-19 03:34:21.278932] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.763 [2024-04-19 03:34:21.279065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.763 [2024-04-19 03:34:21.279091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.763 [2024-04-19 03:34:21.279106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.763 [2024-04-19 03:34:21.279119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.763 [2024-04-19 03:34:21.279162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.763 qpair failed and we were unable to recover it.
00:20:43.763 [2024-04-19 03:34:21.288947] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.763 [2024-04-19 03:34:21.289076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.763 [2024-04-19 03:34:21.289102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.763 [2024-04-19 03:34:21.289117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.763 [2024-04-19 03:34:21.289130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.763 [2024-04-19 03:34:21.289158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.763 qpair failed and we were unable to recover it.
00:20:43.763 [2024-04-19 03:34:21.298986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.763 [2024-04-19 03:34:21.299122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.763 [2024-04-19 03:34:21.299149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.763 [2024-04-19 03:34:21.299163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.763 [2024-04-19 03:34:21.299176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.763 [2024-04-19 03:34:21.299205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.763 qpair failed and we were unable to recover it.
00:20:43.763 [2024-04-19 03:34:21.309013] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.763 [2024-04-19 03:34:21.309145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.763 [2024-04-19 03:34:21.309172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.763 [2024-04-19 03:34:21.309187] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.763 [2024-04-19 03:34:21.309200] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:43.763 [2024-04-19 03:34:21.309228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:43.763 qpair failed and we were unable to recover it.
00:20:44.022 [2024-04-19 03:34:21.319065] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.022 [2024-04-19 03:34:21.319204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.022 [2024-04-19 03:34:21.319230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.022 [2024-04-19 03:34:21.319245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.022 [2024-04-19 03:34:21.319258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:44.022 [2024-04-19 03:34:21.319286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:44.022 qpair failed and we were unable to recover it.
00:20:44.022 [2024-04-19 03:34:21.329044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.022 [2024-04-19 03:34:21.329189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.022 [2024-04-19 03:34:21.329215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.022 [2024-04-19 03:34:21.329230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.022 [2024-04-19 03:34:21.329243] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:44.022 [2024-04-19 03:34:21.329272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:44.022 qpair failed and we were unable to recover it.
00:20:44.022 [2024-04-19 03:34:21.339171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.022 [2024-04-19 03:34:21.339336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.022 [2024-04-19 03:34:21.339361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.022 [2024-04-19 03:34:21.339399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.022 [2024-04-19 03:34:21.339412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:44.022 [2024-04-19 03:34:21.339442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:44.022 qpair failed and we were unable to recover it.
00:20:44.022 [2024-04-19 03:34:21.349102] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.022 [2024-04-19 03:34:21.349243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.022 [2024-04-19 03:34:21.349268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.022 [2024-04-19 03:34:21.349283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.022 [2024-04-19 03:34:21.349301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:44.022 [2024-04-19 03:34:21.349329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:44.022 qpair failed and we were unable to recover it.
00:20:44.022 [2024-04-19 03:34:21.359148] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.022 [2024-04-19 03:34:21.359282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.022 [2024-04-19 03:34:21.359306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.022 [2024-04-19 03:34:21.359321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.022 [2024-04-19 03:34:21.359334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:44.022 [2024-04-19 03:34:21.359363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:44.022 qpair failed and we were unable to recover it.
00:20:44.022 [2024-04-19 03:34:21.369167] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.022 [2024-04-19 03:34:21.369303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.022 [2024-04-19 03:34:21.369329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.022 [2024-04-19 03:34:21.369348] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.022 [2024-04-19 03:34:21.369361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:44.022 [2024-04-19 03:34:21.369398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:44.022 qpair failed and we were unable to recover it.
00:20:44.022 [2024-04-19 03:34:21.379219] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.022 [2024-04-19 03:34:21.379361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.022 [2024-04-19 03:34:21.379392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.022 [2024-04-19 03:34:21.379409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.022 [2024-04-19 03:34:21.379429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:44.022 [2024-04-19 03:34:21.379458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:44.022 qpair failed and we were unable to recover it.
00:20:44.022 [2024-04-19 03:34:21.389209] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.022 [2024-04-19 03:34:21.389349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.022 [2024-04-19 03:34:21.389374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.022 [2024-04-19 03:34:21.389397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.022 [2024-04-19 03:34:21.389410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30
00:20:44.022 [2024-04-19 03:34:21.389439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:20:44.022 qpair failed and we were unable to recover it.
00:20:44.022 [2024-04-19 03:34:21.399275] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.022 [2024-04-19 03:34:21.399436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.022 [2024-04-19 03:34:21.399470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.022 [2024-04-19 03:34:21.399486] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.022 [2024-04-19 03:34:21.399498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.022 [2024-04-19 03:34:21.399530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.022 qpair failed and we were unable to recover it.
00:20:44.022 [2024-04-19 03:34:21.409301] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.022 [2024-04-19 03:34:21.409434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.022 [2024-04-19 03:34:21.409461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.022 [2024-04-19 03:34:21.409477] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.022 [2024-04-19 03:34:21.409489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.022 [2024-04-19 03:34:21.409532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.022 qpair failed and we were unable to recover it.
00:20:44.022 [2024-04-19 03:34:21.419353] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.022 [2024-04-19 03:34:21.419495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.022 [2024-04-19 03:34:21.419522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.022 [2024-04-19 03:34:21.419537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.022 [2024-04-19 03:34:21.419550] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.022 [2024-04-19 03:34:21.419580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.022 qpair failed and we were unable to recover it.
00:20:44.023 [2024-04-19 03:34:21.429356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.023 [2024-04-19 03:34:21.429530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.023 [2024-04-19 03:34:21.429559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.023 [2024-04-19 03:34:21.429574] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.023 [2024-04-19 03:34:21.429586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.023 [2024-04-19 03:34:21.429617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.023 qpair failed and we were unable to recover it.
00:20:44.023 [2024-04-19 03:34:21.439389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.023 [2024-04-19 03:34:21.439545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.023 [2024-04-19 03:34:21.439573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.023 [2024-04-19 03:34:21.439593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.023 [2024-04-19 03:34:21.439607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.023 [2024-04-19 03:34:21.439638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.023 qpair failed and we were unable to recover it.
00:20:44.023 [2024-04-19 03:34:21.449396] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.023 [2024-04-19 03:34:21.449533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.023 [2024-04-19 03:34:21.449560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.023 [2024-04-19 03:34:21.449574] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.023 [2024-04-19 03:34:21.449587] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.023 [2024-04-19 03:34:21.449616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.023 qpair failed and we were unable to recover it.
00:20:44.023 [2024-04-19 03:34:21.459525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.023 [2024-04-19 03:34:21.459673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.023 [2024-04-19 03:34:21.459699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.023 [2024-04-19 03:34:21.459729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.023 [2024-04-19 03:34:21.459741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.023 [2024-04-19 03:34:21.459785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.023 qpair failed and we were unable to recover it.
00:20:44.023 [2024-04-19 03:34:21.469473] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.023 [2024-04-19 03:34:21.469669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.023 [2024-04-19 03:34:21.469712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.023 [2024-04-19 03:34:21.469729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.023 [2024-04-19 03:34:21.469742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.023 [2024-04-19 03:34:21.469787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.023 qpair failed and we were unable to recover it.
00:20:44.023 [2024-04-19 03:34:21.479460] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.023 [2024-04-19 03:34:21.479640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.023 [2024-04-19 03:34:21.479669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.023 [2024-04-19 03:34:21.479685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.023 [2024-04-19 03:34:21.479698] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.023 [2024-04-19 03:34:21.479727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.023 qpair failed and we were unable to recover it.
00:20:44.023 [2024-04-19 03:34:21.489490] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.023 [2024-04-19 03:34:21.489636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.023 [2024-04-19 03:34:21.489662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.023 [2024-04-19 03:34:21.489677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.023 [2024-04-19 03:34:21.489689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.023 [2024-04-19 03:34:21.489719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.023 qpair failed and we were unable to recover it.
00:20:44.023 [2024-04-19 03:34:21.499523] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.023 [2024-04-19 03:34:21.499652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.023 [2024-04-19 03:34:21.499678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.023 [2024-04-19 03:34:21.499693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.023 [2024-04-19 03:34:21.499706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.023 [2024-04-19 03:34:21.499736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.023 qpair failed and we were unable to recover it.
00:20:44.023 [2024-04-19 03:34:21.509539] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.023 [2024-04-19 03:34:21.509669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.023 [2024-04-19 03:34:21.509695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.023 [2024-04-19 03:34:21.509711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.023 [2024-04-19 03:34:21.509723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.023 [2024-04-19 03:34:21.509753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.023 qpair failed and we were unable to recover it.
00:20:44.023 [2024-04-19 03:34:21.519579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.023 [2024-04-19 03:34:21.519757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.023 [2024-04-19 03:34:21.519783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.023 [2024-04-19 03:34:21.519814] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.023 [2024-04-19 03:34:21.519828] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.023 [2024-04-19 03:34:21.519872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.023 qpair failed and we were unable to recover it.
00:20:44.023 [2024-04-19 03:34:21.529643] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.023 [2024-04-19 03:34:21.529779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.023 [2024-04-19 03:34:21.529810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.023 [2024-04-19 03:34:21.529826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.023 [2024-04-19 03:34:21.529839] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.023 [2024-04-19 03:34:21.529869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.023 qpair failed and we were unable to recover it.
00:20:44.023 [2024-04-19 03:34:21.539707] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.024 [2024-04-19 03:34:21.539844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.024 [2024-04-19 03:34:21.539870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.024 [2024-04-19 03:34:21.539885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.024 [2024-04-19 03:34:21.539898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.024 [2024-04-19 03:34:21.539927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.024 qpair failed and we were unable to recover it.
00:20:44.024 [2024-04-19 03:34:21.549644] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.024 [2024-04-19 03:34:21.549782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.024 [2024-04-19 03:34:21.549808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.024 [2024-04-19 03:34:21.549822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.024 [2024-04-19 03:34:21.549836] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.024 [2024-04-19 03:34:21.549865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.024 qpair failed and we were unable to recover it. 00:20:44.024 [2024-04-19 03:34:21.559688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.024 [2024-04-19 03:34:21.559853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.024 [2024-04-19 03:34:21.559879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.024 [2024-04-19 03:34:21.559894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.024 [2024-04-19 03:34:21.559921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.024 [2024-04-19 03:34:21.559951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.024 qpair failed and we were unable to recover it. 00:20:44.024 [2024-04-19 03:34:21.569725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.024 [2024-04-19 03:34:21.569900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.024 [2024-04-19 03:34:21.569926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.024 [2024-04-19 03:34:21.569941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.024 [2024-04-19 03:34:21.569955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.024 [2024-04-19 03:34:21.569991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.024 qpair failed and we were unable to recover it. 
00:20:44.283 [2024-04-19 03:34:21.579874] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.283 [2024-04-19 03:34:21.580025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.283 [2024-04-19 03:34:21.580051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.283 [2024-04-19 03:34:21.580066] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.283 [2024-04-19 03:34:21.580079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.283 [2024-04-19 03:34:21.580108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.283 qpair failed and we were unable to recover it. 00:20:44.283 [2024-04-19 03:34:21.589910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.283 [2024-04-19 03:34:21.590063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.283 [2024-04-19 03:34:21.590105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.283 [2024-04-19 03:34:21.590121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.283 [2024-04-19 03:34:21.590133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.283 [2024-04-19 03:34:21.590178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.283 qpair failed and we were unable to recover it. 00:20:44.283 [2024-04-19 03:34:21.599812] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.283 [2024-04-19 03:34:21.599943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.283 [2024-04-19 03:34:21.599968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.283 [2024-04-19 03:34:21.599983] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.283 [2024-04-19 03:34:21.599996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.283 [2024-04-19 03:34:21.600026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.283 qpair failed and we were unable to recover it. 
00:20:44.283 [2024-04-19 03:34:21.609868] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.283 [2024-04-19 03:34:21.609995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.283 [2024-04-19 03:34:21.610021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.283 [2024-04-19 03:34:21.610036] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.283 [2024-04-19 03:34:21.610049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.283 [2024-04-19 03:34:21.610079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.283 qpair failed and we were unable to recover it. 00:20:44.283 [2024-04-19 03:34:21.619931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.283 [2024-04-19 03:34:21.620079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.283 [2024-04-19 03:34:21.620110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.283 [2024-04-19 03:34:21.620126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.283 [2024-04-19 03:34:21.620139] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.283 [2024-04-19 03:34:21.620184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.283 qpair failed and we were unable to recover it. 00:20:44.283 [2024-04-19 03:34:21.629914] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.283 [2024-04-19 03:34:21.630042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.283 [2024-04-19 03:34:21.630069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.283 [2024-04-19 03:34:21.630084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.283 [2024-04-19 03:34:21.630097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.283 [2024-04-19 03:34:21.630139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.283 qpair failed and we were unable to recover it. 
00:20:44.283 [2024-04-19 03:34:21.639908] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.283 [2024-04-19 03:34:21.640048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.283 [2024-04-19 03:34:21.640074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.283 [2024-04-19 03:34:21.640089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.283 [2024-04-19 03:34:21.640102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.283 [2024-04-19 03:34:21.640132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.283 qpair failed and we were unable to recover it. 00:20:44.283 [2024-04-19 03:34:21.649970] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.283 [2024-04-19 03:34:21.650102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.283 [2024-04-19 03:34:21.650128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.283 [2024-04-19 03:34:21.650144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.283 [2024-04-19 03:34:21.650156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.283 [2024-04-19 03:34:21.650186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.283 qpair failed and we were unable to recover it. 00:20:44.283 [2024-04-19 03:34:21.659980] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.283 [2024-04-19 03:34:21.660117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.283 [2024-04-19 03:34:21.660143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.283 [2024-04-19 03:34:21.660158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.283 [2024-04-19 03:34:21.660171] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.283 [2024-04-19 03:34:21.660206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.283 qpair failed and we were unable to recover it. 
00:20:44.283 [2024-04-19 03:34:21.669998] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.283 [2024-04-19 03:34:21.670127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.283 [2024-04-19 03:34:21.670153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.283 [2024-04-19 03:34:21.670168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.283 [2024-04-19 03:34:21.670181] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.283 [2024-04-19 03:34:21.670211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.283 qpair failed and we were unable to recover it. 00:20:44.283 [2024-04-19 03:34:21.680128] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.283 [2024-04-19 03:34:21.680293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.283 [2024-04-19 03:34:21.680319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.283 [2024-04-19 03:34:21.680334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.283 [2024-04-19 03:34:21.680347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.283 [2024-04-19 03:34:21.680376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.283 qpair failed and we were unable to recover it. 00:20:44.284 [2024-04-19 03:34:21.690086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.690217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.284 [2024-04-19 03:34:21.690243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.284 [2024-04-19 03:34:21.690258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.284 [2024-04-19 03:34:21.690270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.284 [2024-04-19 03:34:21.690299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.284 qpair failed and we were unable to recover it. 
00:20:44.284 [2024-04-19 03:34:21.700149] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.700281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.284 [2024-04-19 03:34:21.700307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.284 [2024-04-19 03:34:21.700322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.284 [2024-04-19 03:34:21.700335] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.284 [2024-04-19 03:34:21.700364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.284 qpair failed and we were unable to recover it. 00:20:44.284 [2024-04-19 03:34:21.710158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.710305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.284 [2024-04-19 03:34:21.710334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.284 [2024-04-19 03:34:21.710350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.284 [2024-04-19 03:34:21.710364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.284 [2024-04-19 03:34:21.710414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.284 qpair failed and we were unable to recover it. 00:20:44.284 [2024-04-19 03:34:21.720150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.720280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.284 [2024-04-19 03:34:21.720309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.284 [2024-04-19 03:34:21.720325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.284 [2024-04-19 03:34:21.720337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.284 [2024-04-19 03:34:21.720367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.284 qpair failed and we were unable to recover it. 
00:20:44.284 [2024-04-19 03:34:21.730201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.730378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.284 [2024-04-19 03:34:21.730413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.284 [2024-04-19 03:34:21.730429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.284 [2024-04-19 03:34:21.730442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.284 [2024-04-19 03:34:21.730473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.284 qpair failed and we were unable to recover it. 00:20:44.284 [2024-04-19 03:34:21.740230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.740366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.284 [2024-04-19 03:34:21.740401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.284 [2024-04-19 03:34:21.740417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.284 [2024-04-19 03:34:21.740430] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.284 [2024-04-19 03:34:21.740460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.284 qpair failed and we were unable to recover it. 00:20:44.284 [2024-04-19 03:34:21.750267] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.750411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.284 [2024-04-19 03:34:21.750436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.284 [2024-04-19 03:34:21.750451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.284 [2024-04-19 03:34:21.750469] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.284 [2024-04-19 03:34:21.750499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.284 qpair failed and we were unable to recover it. 
00:20:44.284 [2024-04-19 03:34:21.760276] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.760413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.284 [2024-04-19 03:34:21.760440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.284 [2024-04-19 03:34:21.760454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.284 [2024-04-19 03:34:21.760467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.284 [2024-04-19 03:34:21.760497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.284 qpair failed and we were unable to recover it. 00:20:44.284 [2024-04-19 03:34:21.770340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.770475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.284 [2024-04-19 03:34:21.770502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.284 [2024-04-19 03:34:21.770516] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.284 [2024-04-19 03:34:21.770529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.284 [2024-04-19 03:34:21.770571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.284 qpair failed and we were unable to recover it. 00:20:44.284 [2024-04-19 03:34:21.780347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.780486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.284 [2024-04-19 03:34:21.780515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.284 [2024-04-19 03:34:21.780531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.284 [2024-04-19 03:34:21.780544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.284 [2024-04-19 03:34:21.780575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.284 qpair failed and we were unable to recover it. 
00:20:44.284 [2024-04-19 03:34:21.790415] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.790594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.284 [2024-04-19 03:34:21.790621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.284 [2024-04-19 03:34:21.790637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.284 [2024-04-19 03:34:21.790650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.284 [2024-04-19 03:34:21.790694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.284 qpair failed and we were unable to recover it. 00:20:44.284 [2024-04-19 03:34:21.800411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.800548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.284 [2024-04-19 03:34:21.800575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.284 [2024-04-19 03:34:21.800591] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.284 [2024-04-19 03:34:21.800604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.284 [2024-04-19 03:34:21.800634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.284 qpair failed and we were unable to recover it. 00:20:44.284 [2024-04-19 03:34:21.810419] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.810557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.284 [2024-04-19 03:34:21.810584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.284 [2024-04-19 03:34:21.810600] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.284 [2024-04-19 03:34:21.810613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.284 [2024-04-19 03:34:21.810643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.284 qpair failed and we were unable to recover it. 
00:20:44.284 [2024-04-19 03:34:21.820506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.284 [2024-04-19 03:34:21.820659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.285 [2024-04-19 03:34:21.820689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.285 [2024-04-19 03:34:21.820704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.285 [2024-04-19 03:34:21.820717] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.285 [2024-04-19 03:34:21.820761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.285 qpair failed and we were unable to recover it. 00:20:44.285 [2024-04-19 03:34:21.830483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.285 [2024-04-19 03:34:21.830620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.285 [2024-04-19 03:34:21.830648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.285 [2024-04-19 03:34:21.830663] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.285 [2024-04-19 03:34:21.830677] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.285 [2024-04-19 03:34:21.830707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.285 qpair failed and we were unable to recover it. 00:20:44.544 [2024-04-19 03:34:21.840530] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.544 [2024-04-19 03:34:21.840669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.544 [2024-04-19 03:34:21.840695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.544 [2024-04-19 03:34:21.840715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.544 [2024-04-19 03:34:21.840729] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.544 [2024-04-19 03:34:21.840759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.544 qpair failed and we were unable to recover it. 
00:20:44.544 [2024-04-19 03:34:21.850666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.544 [2024-04-19 03:34:21.850808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.544 [2024-04-19 03:34:21.850833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.544 [2024-04-19 03:34:21.850848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.544 [2024-04-19 03:34:21.850860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.544 [2024-04-19 03:34:21.850891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.544 qpair failed and we were unable to recover it. 00:20:44.544 [2024-04-19 03:34:21.860587] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.544 [2024-04-19 03:34:21.860720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.544 [2024-04-19 03:34:21.860747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.544 [2024-04-19 03:34:21.860762] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.544 [2024-04-19 03:34:21.860775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.544 [2024-04-19 03:34:21.860804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.544 qpair failed and we were unable to recover it. 00:20:44.544 [2024-04-19 03:34:21.870714] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.544 [2024-04-19 03:34:21.870847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.544 [2024-04-19 03:34:21.870872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.544 [2024-04-19 03:34:21.870891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.544 [2024-04-19 03:34:21.870903] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.544 [2024-04-19 03:34:21.870949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.544 qpair failed and we were unable to recover it. 
00:20:44.544 [2024-04-19 03:34:21.880647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.544 [2024-04-19 03:34:21.880785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.544 [2024-04-19 03:34:21.880812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.544 [2024-04-19 03:34:21.880827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.544 [2024-04-19 03:34:21.880839] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.544 [2024-04-19 03:34:21.880869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.544 qpair failed and we were unable to recover it. 00:20:44.544 [2024-04-19 03:34:21.890672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.544 [2024-04-19 03:34:21.890798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.544 [2024-04-19 03:34:21.890824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.544 [2024-04-19 03:34:21.890840] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.544 [2024-04-19 03:34:21.890853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.544 [2024-04-19 03:34:21.890882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.544 qpair failed and we were unable to recover it. 00:20:44.544 [2024-04-19 03:34:21.900703] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.544 [2024-04-19 03:34:21.900834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.544 [2024-04-19 03:34:21.900859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.544 [2024-04-19 03:34:21.900874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.544 [2024-04-19 03:34:21.900886] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.544 [2024-04-19 03:34:21.900916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.544 qpair failed and we were unable to recover it. 
00:20:44.544 [2024-04-19 03:34:21.910710] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.544 [2024-04-19 03:34:21.910860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.544 [2024-04-19 03:34:21.910887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:21.910902] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:21.910914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:21.910958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.545 qpair failed and we were unable to recover it. 00:20:44.545 [2024-04-19 03:34:21.920737] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.545 [2024-04-19 03:34:21.920869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.545 [2024-04-19 03:34:21.920895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:21.920910] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:21.920923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:21.920952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.545 qpair failed and we were unable to recover it. 00:20:44.545 [2024-04-19 03:34:21.930787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.545 [2024-04-19 03:34:21.930930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.545 [2024-04-19 03:34:21.930956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:21.930976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:21.930989] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:21.931019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.545 qpair failed and we were unable to recover it. 
00:20:44.545 [2024-04-19 03:34:21.940803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.545 [2024-04-19 03:34:21.940933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.545 [2024-04-19 03:34:21.940959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:21.940974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:21.940986] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:21.941027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.545 qpair failed and we were unable to recover it. 00:20:44.545 [2024-04-19 03:34:21.950908] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.545 [2024-04-19 03:34:21.951039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.545 [2024-04-19 03:34:21.951065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:21.951080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:21.951093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:21.951122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.545 qpair failed and we were unable to recover it. 00:20:44.545 [2024-04-19 03:34:21.960881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.545 [2024-04-19 03:34:21.961016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.545 [2024-04-19 03:34:21.961042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:21.961057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:21.961070] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:21.961100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.545 qpair failed and we were unable to recover it. 
00:20:44.545 [2024-04-19 03:34:21.970913] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.545 [2024-04-19 03:34:21.971047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.545 [2024-04-19 03:34:21.971073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:21.971088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:21.971101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:21.971142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.545 qpair failed and we were unable to recover it. 00:20:44.545 [2024-04-19 03:34:21.980965] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.545 [2024-04-19 03:34:21.981127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.545 [2024-04-19 03:34:21.981153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:21.981168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:21.981181] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:21.981225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.545 qpair failed and we were unable to recover it. 00:20:44.545 [2024-04-19 03:34:21.990962] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.545 [2024-04-19 03:34:21.991097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.545 [2024-04-19 03:34:21.991123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:21.991138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:21.991151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:21.991179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.545 qpair failed and we were unable to recover it. 
00:20:44.545 [2024-04-19 03:34:22.000972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.545 [2024-04-19 03:34:22.001103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.545 [2024-04-19 03:34:22.001129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:22.001144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:22.001157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:22.001187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.545 qpair failed and we were unable to recover it. 00:20:44.545 [2024-04-19 03:34:22.011030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.545 [2024-04-19 03:34:22.011208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.545 [2024-04-19 03:34:22.011234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:22.011248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:22.011261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:22.011291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.545 qpair failed and we were unable to recover it. 00:20:44.545 [2024-04-19 03:34:22.021062] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.545 [2024-04-19 03:34:22.021242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.545 [2024-04-19 03:34:22.021272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:22.021288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:22.021301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:22.021343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.545 qpair failed and we were unable to recover it. 
00:20:44.545 [2024-04-19 03:34:22.031042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.545 [2024-04-19 03:34:22.031174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.545 [2024-04-19 03:34:22.031199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:22.031214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:22.031227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:22.031256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.545 qpair failed and we were unable to recover it. 00:20:44.545 [2024-04-19 03:34:22.041222] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.545 [2024-04-19 03:34:22.041353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.545 [2024-04-19 03:34:22.041379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.545 [2024-04-19 03:34:22.041406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.545 [2024-04-19 03:34:22.041420] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.545 [2024-04-19 03:34:22.041450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.546 qpair failed and we were unable to recover it. 00:20:44.546 [2024-04-19 03:34:22.051108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.546 [2024-04-19 03:34:22.051245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.546 [2024-04-19 03:34:22.051271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.546 [2024-04-19 03:34:22.051286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.546 [2024-04-19 03:34:22.051299] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.546 [2024-04-19 03:34:22.051328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.546 qpair failed and we were unable to recover it. 
00:20:44.546 [2024-04-19 03:34:22.061140] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.546 [2024-04-19 03:34:22.061279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.546 [2024-04-19 03:34:22.061305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.546 [2024-04-19 03:34:22.061320] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.546 [2024-04-19 03:34:22.061333] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.546 [2024-04-19 03:34:22.061369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.546 qpair failed and we were unable to recover it. 00:20:44.546 [2024-04-19 03:34:22.071285] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.546 [2024-04-19 03:34:22.071428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.546 [2024-04-19 03:34:22.071454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.546 [2024-04-19 03:34:22.071468] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.546 [2024-04-19 03:34:22.071482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.546 [2024-04-19 03:34:22.071512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.546 qpair failed and we were unable to recover it. 00:20:44.546 [2024-04-19 03:34:22.081217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.546 [2024-04-19 03:34:22.081371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.546 [2024-04-19 03:34:22.081405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.546 [2024-04-19 03:34:22.081423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.546 [2024-04-19 03:34:22.081438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.546 [2024-04-19 03:34:22.081469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.546 qpair failed and we were unable to recover it. 
00:20:44.546 [2024-04-19 03:34:22.091226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.546 [2024-04-19 03:34:22.091375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.546 [2024-04-19 03:34:22.091407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.546 [2024-04-19 03:34:22.091422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.546 [2024-04-19 03:34:22.091435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.546 [2024-04-19 03:34:22.091465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.546 qpair failed and we were unable to recover it. 00:20:44.546 [2024-04-19 03:34:22.101286] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.805 [2024-04-19 03:34:22.101452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.805 [2024-04-19 03:34:22.101478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.805 [2024-04-19 03:34:22.101493] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.805 [2024-04-19 03:34:22.101505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.805 [2024-04-19 03:34:22.101535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.805 qpair failed and we were unable to recover it. 00:20:44.805 [2024-04-19 03:34:22.111294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.805 [2024-04-19 03:34:22.111444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.805 [2024-04-19 03:34:22.111476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.805 [2024-04-19 03:34:22.111492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.805 [2024-04-19 03:34:22.111506] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:44.805 [2024-04-19 03:34:22.111536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:44.805 qpair failed and we were unable to recover it. 
00:20:44.805 [2024-04-19 03:34:22.121311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.805 [2024-04-19 03:34:22.121443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.805 [2024-04-19 03:34:22.121470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.805 [2024-04-19 03:34:22.121484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.805 [2024-04-19 03:34:22.121497] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.805 [2024-04-19 03:34:22.121527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.805 qpair failed and we were unable to recover it.
00:20:44.805 [2024-04-19 03:34:22.131375] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.805 [2024-04-19 03:34:22.131508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.805 [2024-04-19 03:34:22.131535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.805 [2024-04-19 03:34:22.131551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.805 [2024-04-19 03:34:22.131563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.805 [2024-04-19 03:34:22.131605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.805 qpair failed and we were unable to recover it.
00:20:44.805 [2024-04-19 03:34:22.141414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.805 [2024-04-19 03:34:22.141545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.805 [2024-04-19 03:34:22.141571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.805 [2024-04-19 03:34:22.141586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.805 [2024-04-19 03:34:22.141600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.805 [2024-04-19 03:34:22.141642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.805 qpair failed and we were unable to recover it.
00:20:44.805 [2024-04-19 03:34:22.151418] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.805 [2024-04-19 03:34:22.151551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.151576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.151591] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.151609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.806 [2024-04-19 03:34:22.151640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.806 qpair failed and we were unable to recover it.
00:20:44.806 [2024-04-19 03:34:22.161435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.806 [2024-04-19 03:34:22.161572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.161599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.161614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.161627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.806 [2024-04-19 03:34:22.161656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.806 qpair failed and we were unable to recover it.
00:20:44.806 [2024-04-19 03:34:22.171472] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.806 [2024-04-19 03:34:22.171645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.171671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.171687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.171700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.806 [2024-04-19 03:34:22.171729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.806 qpair failed and we were unable to recover it.
00:20:44.806 [2024-04-19 03:34:22.181528] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.806 [2024-04-19 03:34:22.181708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.181735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.181751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.181763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.806 [2024-04-19 03:34:22.181793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.806 qpair failed and we were unable to recover it.
00:20:44.806 [2024-04-19 03:34:22.191547] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.806 [2024-04-19 03:34:22.191738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.191780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.191799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.191811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.806 [2024-04-19 03:34:22.191856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.806 qpair failed and we were unable to recover it.
00:20:44.806 [2024-04-19 03:34:22.201569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.806 [2024-04-19 03:34:22.201745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.201773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.201789] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.201801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.806 [2024-04-19 03:34:22.201830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.806 qpair failed and we were unable to recover it.
00:20:44.806 [2024-04-19 03:34:22.211636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.806 [2024-04-19 03:34:22.211809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.211836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.211851] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.211864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.806 [2024-04-19 03:34:22.211906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.806 qpair failed and we were unable to recover it.
00:20:44.806 [2024-04-19 03:34:22.221674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.806 [2024-04-19 03:34:22.221808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.221835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.221850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.221863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.806 [2024-04-19 03:34:22.221904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.806 qpair failed and we were unable to recover it.
00:20:44.806 [2024-04-19 03:34:22.231651] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.806 [2024-04-19 03:34:22.231791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.231818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.231833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.231846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.806 [2024-04-19 03:34:22.231875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.806 qpair failed and we were unable to recover it.
00:20:44.806 [2024-04-19 03:34:22.241669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.806 [2024-04-19 03:34:22.241794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.241822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.241843] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.241856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.806 [2024-04-19 03:34:22.241897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.806 qpair failed and we were unable to recover it.
00:20:44.806 [2024-04-19 03:34:22.251691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.806 [2024-04-19 03:34:22.251818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.251845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.251860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.251872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.806 [2024-04-19 03:34:22.251901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.806 qpair failed and we were unable to recover it.
00:20:44.806 [2024-04-19 03:34:22.261723] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.806 [2024-04-19 03:34:22.261854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.261880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.261895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.261907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.806 [2024-04-19 03:34:22.261936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.806 qpair failed and we were unable to recover it.
00:20:44.806 [2024-04-19 03:34:22.271753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.806 [2024-04-19 03:34:22.271886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.271912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.271927] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.271940] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.806 [2024-04-19 03:34:22.271969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.806 qpair failed and we were unable to recover it.
00:20:44.806 [2024-04-19 03:34:22.281806] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.806 [2024-04-19 03:34:22.281941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.806 [2024-04-19 03:34:22.281968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.806 [2024-04-19 03:34:22.281983] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.806 [2024-04-19 03:34:22.281995] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.807 [2024-04-19 03:34:22.282024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.807 qpair failed and we were unable to recover it.
00:20:44.807 [2024-04-19 03:34:22.291800] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.807 [2024-04-19 03:34:22.291938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.807 [2024-04-19 03:34:22.291968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.807 [2024-04-19 03:34:22.291984] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.807 [2024-04-19 03:34:22.291997] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.807 [2024-04-19 03:34:22.292027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.807 qpair failed and we were unable to recover it.
00:20:44.807 [2024-04-19 03:34:22.301872] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.807 [2024-04-19 03:34:22.302011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.807 [2024-04-19 03:34:22.302038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.807 [2024-04-19 03:34:22.302053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.807 [2024-04-19 03:34:22.302066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.807 [2024-04-19 03:34:22.302095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.807 qpair failed and we were unable to recover it.
00:20:44.807 [2024-04-19 03:34:22.311865] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.807 [2024-04-19 03:34:22.311997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.807 [2024-04-19 03:34:22.312024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.807 [2024-04-19 03:34:22.312040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.807 [2024-04-19 03:34:22.312052] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.807 [2024-04-19 03:34:22.312082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.807 qpair failed and we were unable to recover it.
00:20:44.807 [2024-04-19 03:34:22.321885] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.807 [2024-04-19 03:34:22.322055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.807 [2024-04-19 03:34:22.322082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.807 [2024-04-19 03:34:22.322097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.807 [2024-04-19 03:34:22.322110] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.807 [2024-04-19 03:34:22.322151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.807 qpair failed and we were unable to recover it.
00:20:44.807 [2024-04-19 03:34:22.331969] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.807 [2024-04-19 03:34:22.332137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.807 [2024-04-19 03:34:22.332164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.807 [2024-04-19 03:34:22.332185] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.807 [2024-04-19 03:34:22.332198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.807 [2024-04-19 03:34:22.332228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.807 qpair failed and we were unable to recover it.
00:20:44.807 [2024-04-19 03:34:22.341948] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.807 [2024-04-19 03:34:22.342084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.807 [2024-04-19 03:34:22.342111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.807 [2024-04-19 03:34:22.342126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.807 [2024-04-19 03:34:22.342138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.807 [2024-04-19 03:34:22.342168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.807 qpair failed and we were unable to recover it.
00:20:44.807 [2024-04-19 03:34:22.351996] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.807 [2024-04-19 03:34:22.352132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.807 [2024-04-19 03:34:22.352159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.807 [2024-04-19 03:34:22.352174] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.807 [2024-04-19 03:34:22.352187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.807 [2024-04-19 03:34:22.352217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.807 qpair failed and we were unable to recover it.
00:20:44.807 [2024-04-19 03:34:22.362016] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.807 [2024-04-19 03:34:22.362155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.807 [2024-04-19 03:34:22.362182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.807 [2024-04-19 03:34:22.362197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.807 [2024-04-19 03:34:22.362210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:44.807 [2024-04-19 03:34:22.362239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:44.807 qpair failed and we were unable to recover it.
00:20:45.066 [2024-04-19 03:34:22.372116] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.066 [2024-04-19 03:34:22.372264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.066 [2024-04-19 03:34:22.372291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.066 [2024-04-19 03:34:22.372307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.066 [2024-04-19 03:34:22.372320] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.066 [2024-04-19 03:34:22.372349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.066 qpair failed and we were unable to recover it.
00:20:45.066 [2024-04-19 03:34:22.382119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.066 [2024-04-19 03:34:22.382262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.066 [2024-04-19 03:34:22.382290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.066 [2024-04-19 03:34:22.382305] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.066 [2024-04-19 03:34:22.382318] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.066 [2024-04-19 03:34:22.382363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.066 qpair failed and we were unable to recover it.
00:20:45.066 [2024-04-19 03:34:22.392130] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.066 [2024-04-19 03:34:22.392302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.066 [2024-04-19 03:34:22.392330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.066 [2024-04-19 03:34:22.392363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.066 [2024-04-19 03:34:22.392376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.066 [2024-04-19 03:34:22.392431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.066 qpair failed and we were unable to recover it.
00:20:45.066 [2024-04-19 03:34:22.402108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.066 [2024-04-19 03:34:22.402250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.066 [2024-04-19 03:34:22.402279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.066 [2024-04-19 03:34:22.402294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.066 [2024-04-19 03:34:22.402307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.066 [2024-04-19 03:34:22.402336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.066 qpair failed and we were unable to recover it.
00:20:45.066 [2024-04-19 03:34:22.412168] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.066 [2024-04-19 03:34:22.412302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.066 [2024-04-19 03:34:22.412332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.066 [2024-04-19 03:34:22.412347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.066 [2024-04-19 03:34:22.412360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.066 [2024-04-19 03:34:22.412417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.066 qpair failed and we were unable to recover it.
00:20:45.066 [2024-04-19 03:34:22.422179] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.066 [2024-04-19 03:34:22.422315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.066 [2024-04-19 03:34:22.422347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.066 [2024-04-19 03:34:22.422363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.422395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.422428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.067 [2024-04-19 03:34:22.432224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.067 [2024-04-19 03:34:22.432390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.067 [2024-04-19 03:34:22.432419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.067 [2024-04-19 03:34:22.432434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.432448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.432489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.067 [2024-04-19 03:34:22.442271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.067 [2024-04-19 03:34:22.442422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.067 [2024-04-19 03:34:22.442450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.067 [2024-04-19 03:34:22.442465] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.442477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.442507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.067 [2024-04-19 03:34:22.452237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.067 [2024-04-19 03:34:22.452378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.067 [2024-04-19 03:34:22.452413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.067 [2024-04-19 03:34:22.452428] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.452440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.452471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.067 [2024-04-19 03:34:22.462286] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.067 [2024-04-19 03:34:22.462452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.067 [2024-04-19 03:34:22.462479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.067 [2024-04-19 03:34:22.462495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.462508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.462543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.067 [2024-04-19 03:34:22.472304] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.067 [2024-04-19 03:34:22.472474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.067 [2024-04-19 03:34:22.472499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.067 [2024-04-19 03:34:22.472513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.472526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.472555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.067 [2024-04-19 03:34:22.482345] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.067 [2024-04-19 03:34:22.482521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.067 [2024-04-19 03:34:22.482549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.067 [2024-04-19 03:34:22.482564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.482577] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.482606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.067 [2024-04-19 03:34:22.492413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.067 [2024-04-19 03:34:22.492585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.067 [2024-04-19 03:34:22.492611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.067 [2024-04-19 03:34:22.492626] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.492639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.492669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.067 [2024-04-19 03:34:22.502510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.067 [2024-04-19 03:34:22.502657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.067 [2024-04-19 03:34:22.502683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.067 [2024-04-19 03:34:22.502698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.502711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.502740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.067 [2024-04-19 03:34:22.512437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.067 [2024-04-19 03:34:22.512609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.067 [2024-04-19 03:34:22.512643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.067 [2024-04-19 03:34:22.512663] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.512676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.512717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.067 [2024-04-19 03:34:22.522431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.067 [2024-04-19 03:34:22.522569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.067 [2024-04-19 03:34:22.522596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.067 [2024-04-19 03:34:22.522611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.522623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.522653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.067 [2024-04-19 03:34:22.532466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.067 [2024-04-19 03:34:22.532603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.067 [2024-04-19 03:34:22.532629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.067 [2024-04-19 03:34:22.532645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.532657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.532691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.067 [2024-04-19 03:34:22.542597] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.067 [2024-04-19 03:34:22.542741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.067 [2024-04-19 03:34:22.542768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.067 [2024-04-19 03:34:22.542783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.542796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.542825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.067 [2024-04-19 03:34:22.552522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.067 [2024-04-19 03:34:22.552716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.067 [2024-04-19 03:34:22.552743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.067 [2024-04-19 03:34:22.552759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.067 [2024-04-19 03:34:22.552777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.067 [2024-04-19 03:34:22.552807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.067 qpair failed and we were unable to recover it.
00:20:45.068 [2024-04-19 03:34:22.562563] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.068 [2024-04-19 03:34:22.562710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.068 [2024-04-19 03:34:22.562737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.068 [2024-04-19 03:34:22.562752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.068 [2024-04-19 03:34:22.562764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.068 [2024-04-19 03:34:22.562810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.068 qpair failed and we were unable to recover it.
00:20:45.068 [2024-04-19 03:34:22.572574] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.068 [2024-04-19 03:34:22.572709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.068 [2024-04-19 03:34:22.572736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.068 [2024-04-19 03:34:22.572751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.068 [2024-04-19 03:34:22.572763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.068 [2024-04-19 03:34:22.572793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.068 qpair failed and we were unable to recover it.
00:20:45.068 [2024-04-19 03:34:22.582706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.068 [2024-04-19 03:34:22.582847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.068 [2024-04-19 03:34:22.582873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.068 [2024-04-19 03:34:22.582888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.068 [2024-04-19 03:34:22.582900] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.068 [2024-04-19 03:34:22.582944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.068 qpair failed and we were unable to recover it.
00:20:45.068 [2024-04-19 03:34:22.592647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.068 [2024-04-19 03:34:22.592786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.068 [2024-04-19 03:34:22.592813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.068 [2024-04-19 03:34:22.592827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.068 [2024-04-19 03:34:22.592840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.068 [2024-04-19 03:34:22.592869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.068 qpair failed and we were unable to recover it.
00:20:45.068 [2024-04-19 03:34:22.602697] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.068 [2024-04-19 03:34:22.602846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.068 [2024-04-19 03:34:22.602874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.068 [2024-04-19 03:34:22.602889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.068 [2024-04-19 03:34:22.602902] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.068 [2024-04-19 03:34:22.602942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.068 qpair failed and we were unable to recover it.
00:20:45.068 [2024-04-19 03:34:22.612818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.068 [2024-04-19 03:34:22.612978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.068 [2024-04-19 03:34:22.613005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.068 [2024-04-19 03:34:22.613020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.068 [2024-04-19 03:34:22.613033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.068 [2024-04-19 03:34:22.613063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.068 qpair failed and we were unable to recover it.
00:20:45.068 [2024-04-19 03:34:22.622743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.068 [2024-04-19 03:34:22.622898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.068 [2024-04-19 03:34:22.622926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.068 [2024-04-19 03:34:22.622941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.068 [2024-04-19 03:34:22.622954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.068 [2024-04-19 03:34:22.622999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.068 qpair failed and we were unable to recover it.
00:20:45.327 [2024-04-19 03:34:22.632763] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.327 [2024-04-19 03:34:22.632901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.327 [2024-04-19 03:34:22.632928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.327 [2024-04-19 03:34:22.632943] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.327 [2024-04-19 03:34:22.632955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.327 [2024-04-19 03:34:22.632985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.327 qpair failed and we were unable to recover it.
00:20:45.327 [2024-04-19 03:34:22.642825] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.327 [2024-04-19 03:34:22.643006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.327 [2024-04-19 03:34:22.643048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.327 [2024-04-19 03:34:22.643063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.327 [2024-04-19 03:34:22.643082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.327 [2024-04-19 03:34:22.643126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.327 qpair failed and we were unable to recover it.
00:20:45.327 [2024-04-19 03:34:22.652898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.327 [2024-04-19 03:34:22.653047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.327 [2024-04-19 03:34:22.653074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.327 [2024-04-19 03:34:22.653089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.327 [2024-04-19 03:34:22.653102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.327 [2024-04-19 03:34:22.653131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.327 qpair failed and we were unable to recover it.
00:20:45.327 [2024-04-19 03:34:22.662886] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.327 [2024-04-19 03:34:22.663029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.327 [2024-04-19 03:34:22.663056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.327 [2024-04-19 03:34:22.663071] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.327 [2024-04-19 03:34:22.663083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.327 [2024-04-19 03:34:22.663128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.327 qpair failed and we were unable to recover it.
00:20:45.327 [2024-04-19 03:34:22.672922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.328 [2024-04-19 03:34:22.673063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.328 [2024-04-19 03:34:22.673089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.328 [2024-04-19 03:34:22.673104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.328 [2024-04-19 03:34:22.673117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.328 [2024-04-19 03:34:22.673158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.328 qpair failed and we were unable to recover it.
00:20:45.328 [2024-04-19 03:34:22.682973] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.328 [2024-04-19 03:34:22.683120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.328 [2024-04-19 03:34:22.683147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.328 [2024-04-19 03:34:22.683161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.328 [2024-04-19 03:34:22.683174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.328 [2024-04-19 03:34:22.683203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.328 qpair failed and we were unable to recover it.
00:20:45.328 [2024-04-19 03:34:22.693016] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.328 [2024-04-19 03:34:22.693152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.328 [2024-04-19 03:34:22.693179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.328 [2024-04-19 03:34:22.693194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.328 [2024-04-19 03:34:22.693207] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.328 [2024-04-19 03:34:22.693236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.328 qpair failed and we were unable to recover it. 00:20:45.328 [2024-04-19 03:34:22.703043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.328 [2024-04-19 03:34:22.703222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.328 [2024-04-19 03:34:22.703248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.328 [2024-04-19 03:34:22.703264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.328 [2024-04-19 03:34:22.703276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.328 [2024-04-19 03:34:22.703305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.328 qpair failed and we were unable to recover it. 00:20:45.328 [2024-04-19 03:34:22.713095] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.328 [2024-04-19 03:34:22.713237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.328 [2024-04-19 03:34:22.713264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.328 [2024-04-19 03:34:22.713280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.328 [2024-04-19 03:34:22.713293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.328 [2024-04-19 03:34:22.713331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.328 qpair failed and we were unable to recover it. 
00:20:45.328 [2024-04-19 03:34:22.723021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.328 [2024-04-19 03:34:22.723165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.328 [2024-04-19 03:34:22.723193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.328 [2024-04-19 03:34:22.723208] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.328 [2024-04-19 03:34:22.723221] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.328 [2024-04-19 03:34:22.723251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.328 qpair failed and we were unable to recover it. 00:20:45.328 [2024-04-19 03:34:22.733033] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.328 [2024-04-19 03:34:22.733178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.328 [2024-04-19 03:34:22.733205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.328 [2024-04-19 03:34:22.733227] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.328 [2024-04-19 03:34:22.733240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.328 [2024-04-19 03:34:22.733269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.328 qpair failed and we were unable to recover it. 00:20:45.328 [2024-04-19 03:34:22.743078] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.328 [2024-04-19 03:34:22.743257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.328 [2024-04-19 03:34:22.743283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.328 [2024-04-19 03:34:22.743298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.328 [2024-04-19 03:34:22.743311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.328 [2024-04-19 03:34:22.743340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.328 qpair failed and we were unable to recover it. 
00:20:45.328 [2024-04-19 03:34:22.753087] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.328 [2024-04-19 03:34:22.753226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.328 [2024-04-19 03:34:22.753252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.328 [2024-04-19 03:34:22.753266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.328 [2024-04-19 03:34:22.753279] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.328 [2024-04-19 03:34:22.753308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.328 qpair failed and we were unable to recover it. 00:20:45.328 [2024-04-19 03:34:22.763111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.328 [2024-04-19 03:34:22.763249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.328 [2024-04-19 03:34:22.763275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.328 [2024-04-19 03:34:22.763291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.328 [2024-04-19 03:34:22.763303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.328 [2024-04-19 03:34:22.763332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.328 qpair failed and we were unable to recover it. 00:20:45.328 [2024-04-19 03:34:22.773155] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.328 [2024-04-19 03:34:22.773295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.328 [2024-04-19 03:34:22.773321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.328 [2024-04-19 03:34:22.773337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.328 [2024-04-19 03:34:22.773349] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.328 [2024-04-19 03:34:22.773378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.328 qpair failed and we were unable to recover it. 
00:20:45.328 [2024-04-19 03:34:22.783243] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.328 [2024-04-19 03:34:22.783438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.328 [2024-04-19 03:34:22.783465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.328 [2024-04-19 03:34:22.783481] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.328 [2024-04-19 03:34:22.783494] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.328 [2024-04-19 03:34:22.783536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.328 qpair failed and we were unable to recover it. 00:20:45.329 [2024-04-19 03:34:22.793241] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.329 [2024-04-19 03:34:22.793394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.329 [2024-04-19 03:34:22.793421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.329 [2024-04-19 03:34:22.793437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.329 [2024-04-19 03:34:22.793450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.329 [2024-04-19 03:34:22.793491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.329 qpair failed and we were unable to recover it. 00:20:45.329 [2024-04-19 03:34:22.803235] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.329 [2024-04-19 03:34:22.803376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.329 [2024-04-19 03:34:22.803410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.329 [2024-04-19 03:34:22.803426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.329 [2024-04-19 03:34:22.803438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.329 [2024-04-19 03:34:22.803467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.329 qpair failed and we were unable to recover it. 
00:20:45.329 [2024-04-19 03:34:22.813255] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.329 [2024-04-19 03:34:22.813390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.329 [2024-04-19 03:34:22.813417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.329 [2024-04-19 03:34:22.813432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.329 [2024-04-19 03:34:22.813445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.329 [2024-04-19 03:34:22.813475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.329 qpair failed and we were unable to recover it. 00:20:45.329 [2024-04-19 03:34:22.823328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.329 [2024-04-19 03:34:22.823500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.329 [2024-04-19 03:34:22.823533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.329 [2024-04-19 03:34:22.823549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.329 [2024-04-19 03:34:22.823562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.329 [2024-04-19 03:34:22.823591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.329 qpair failed and we were unable to recover it. 00:20:45.329 [2024-04-19 03:34:22.833371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.329 [2024-04-19 03:34:22.833544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.329 [2024-04-19 03:34:22.833571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.329 [2024-04-19 03:34:22.833587] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.329 [2024-04-19 03:34:22.833600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.329 [2024-04-19 03:34:22.833629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.329 qpair failed and we were unable to recover it. 
00:20:45.329 [2024-04-19 03:34:22.843388] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.329 [2024-04-19 03:34:22.843527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.329 [2024-04-19 03:34:22.843554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.329 [2024-04-19 03:34:22.843569] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.329 [2024-04-19 03:34:22.843581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.329 [2024-04-19 03:34:22.843610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.329 qpair failed and we were unable to recover it. 00:20:45.329 [2024-04-19 03:34:22.853377] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.329 [2024-04-19 03:34:22.853517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.329 [2024-04-19 03:34:22.853544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.329 [2024-04-19 03:34:22.853560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.329 [2024-04-19 03:34:22.853572] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.329 [2024-04-19 03:34:22.853601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.329 qpair failed and we were unable to recover it. 00:20:45.329 [2024-04-19 03:34:22.863425] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.329 [2024-04-19 03:34:22.863568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.329 [2024-04-19 03:34:22.863594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.329 [2024-04-19 03:34:22.863610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.329 [2024-04-19 03:34:22.863622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.329 [2024-04-19 03:34:22.863658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.329 qpair failed and we were unable to recover it. 
00:20:45.329 [2024-04-19 03:34:22.873433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.329 [2024-04-19 03:34:22.873613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.329 [2024-04-19 03:34:22.873641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.329 [2024-04-19 03:34:22.873656] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.329 [2024-04-19 03:34:22.873675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.329 [2024-04-19 03:34:22.873705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.329 qpair failed and we were unable to recover it. 00:20:45.329 [2024-04-19 03:34:22.883588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.329 [2024-04-19 03:34:22.883734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.329 [2024-04-19 03:34:22.883761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.329 [2024-04-19 03:34:22.883776] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.329 [2024-04-19 03:34:22.883789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.329 [2024-04-19 03:34:22.883834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.329 qpair failed and we were unable to recover it. 00:20:45.591 [2024-04-19 03:34:22.893469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.591 [2024-04-19 03:34:22.893610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.591 [2024-04-19 03:34:22.893637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.591 [2024-04-19 03:34:22.893652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.591 [2024-04-19 03:34:22.893674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.591 [2024-04-19 03:34:22.893704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.591 qpair failed and we were unable to recover it. 
00:20:45.591 [2024-04-19 03:34:22.903533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.591 [2024-04-19 03:34:22.903696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.591 [2024-04-19 03:34:22.903723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.591 [2024-04-19 03:34:22.903738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.591 [2024-04-19 03:34:22.903750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.591 [2024-04-19 03:34:22.903794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.591 qpair failed and we were unable to recover it. 00:20:45.591 [2024-04-19 03:34:22.913542] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.591 [2024-04-19 03:34:22.913688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.591 [2024-04-19 03:34:22.913720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.591 [2024-04-19 03:34:22.913736] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.591 [2024-04-19 03:34:22.913749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.591 [2024-04-19 03:34:22.913778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.591 qpair failed and we were unable to recover it. 00:20:45.591 [2024-04-19 03:34:22.923589] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.591 [2024-04-19 03:34:22.923729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.591 [2024-04-19 03:34:22.923756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.591 [2024-04-19 03:34:22.923771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.591 [2024-04-19 03:34:22.923784] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.591 [2024-04-19 03:34:22.923826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.591 qpair failed and we were unable to recover it. 
00:20:45.591 [2024-04-19 03:34:22.933578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.591 [2024-04-19 03:34:22.933712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.591 [2024-04-19 03:34:22.933739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.591 [2024-04-19 03:34:22.933755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.591 [2024-04-19 03:34:22.933768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.591 [2024-04-19 03:34:22.933797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.591 qpair failed and we were unable to recover it. 00:20:45.591 [2024-04-19 03:34:22.943646] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.591 [2024-04-19 03:34:22.943794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.591 [2024-04-19 03:34:22.943821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.591 [2024-04-19 03:34:22.943836] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.591 [2024-04-19 03:34:22.943849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.591 [2024-04-19 03:34:22.943895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.591 qpair failed and we were unable to recover it. 00:20:45.591 [2024-04-19 03:34:22.953709] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.591 [2024-04-19 03:34:22.953855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.591 [2024-04-19 03:34:22.953882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.591 [2024-04-19 03:34:22.953898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.591 [2024-04-19 03:34:22.953916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.591 [2024-04-19 03:34:22.953962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.591 qpair failed and we were unable to recover it. 
00:20:45.591 [2024-04-19 03:34:22.963659] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.591 [2024-04-19 03:34:22.963790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.591 [2024-04-19 03:34:22.963816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.591 [2024-04-19 03:34:22.963831] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.591 [2024-04-19 03:34:22.963844] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.591 [2024-04-19 03:34:22.963872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.591 qpair failed and we were unable to recover it. 00:20:45.591 [2024-04-19 03:34:22.973709] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.591 [2024-04-19 03:34:22.973845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.591 [2024-04-19 03:34:22.973871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.591 [2024-04-19 03:34:22.973886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.591 [2024-04-19 03:34:22.973898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.591 [2024-04-19 03:34:22.973927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.591 qpair failed and we were unable to recover it. 00:20:45.591 [2024-04-19 03:34:22.983760] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.591 [2024-04-19 03:34:22.983897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:22.983922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:22.983937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:22.983949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:22.983978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.592 qpair failed and we were unable to recover it. 
00:20:45.592 [2024-04-19 03:34:22.993756] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.592 [2024-04-19 03:34:22.993887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:22.993912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:22.993928] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:22.993940] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:22.993969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.592 qpair failed and we were unable to recover it. 00:20:45.592 [2024-04-19 03:34:23.003774] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.592 [2024-04-19 03:34:23.003908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:23.003933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:23.003948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:23.003960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:23.003990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.592 qpair failed and we were unable to recover it. 00:20:45.592 [2024-04-19 03:34:23.013853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.592 [2024-04-19 03:34:23.013981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:23.014007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:23.014022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:23.014035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:23.014065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.592 qpair failed and we were unable to recover it. 
00:20:45.592 [2024-04-19 03:34:23.023966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.592 [2024-04-19 03:34:23.024100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:23.024139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:23.024154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:23.024166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:23.024196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.592 qpair failed and we were unable to recover it. 00:20:45.592 [2024-04-19 03:34:23.033980] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.592 [2024-04-19 03:34:23.034116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:23.034142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:23.034158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:23.034170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:23.034214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.592 qpair failed and we were unable to recover it. 00:20:45.592 [2024-04-19 03:34:23.043901] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.592 [2024-04-19 03:34:23.044033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:23.044058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:23.044072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:23.044091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:23.044121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.592 qpair failed and we were unable to recover it. 
00:20:45.592 [2024-04-19 03:34:23.053986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.592 [2024-04-19 03:34:23.054123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:23.054150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:23.054169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:23.054183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:23.054213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.592 qpair failed and we were unable to recover it. 00:20:45.592 [2024-04-19 03:34:23.063956] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.592 [2024-04-19 03:34:23.064135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:23.064161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:23.064176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:23.064189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:23.064219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.592 qpair failed and we were unable to recover it. 00:20:45.592 [2024-04-19 03:34:23.074006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.592 [2024-04-19 03:34:23.074137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:23.074166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:23.074181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:23.074194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:23.074224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.592 qpair failed and we were unable to recover it. 
00:20:45.592 [2024-04-19 03:34:23.084030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.592 [2024-04-19 03:34:23.084168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:23.084194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:23.084209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:23.084222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:23.084251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.592 qpair failed and we were unable to recover it. 00:20:45.592 [2024-04-19 03:34:23.094057] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.592 [2024-04-19 03:34:23.094196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:23.094222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:23.094238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:23.094251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:23.094280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.592 qpair failed and we were unable to recover it. 00:20:45.592 [2024-04-19 03:34:23.104081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.592 [2024-04-19 03:34:23.104216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:23.104242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:23.104257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:23.104270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:23.104299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.592 qpair failed and we were unable to recover it. 
00:20:45.592 [2024-04-19 03:34:23.114109] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.592 [2024-04-19 03:34:23.114241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.592 [2024-04-19 03:34:23.114266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.592 [2024-04-19 03:34:23.114282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.592 [2024-04-19 03:34:23.114295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.592 [2024-04-19 03:34:23.114324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.593 qpair failed and we were unable to recover it. 00:20:45.593 [2024-04-19 03:34:23.124151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.593 [2024-04-19 03:34:23.124334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.593 [2024-04-19 03:34:23.124362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.593 [2024-04-19 03:34:23.124388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.593 [2024-04-19 03:34:23.124407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.593 [2024-04-19 03:34:23.124439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.593 qpair failed and we were unable to recover it. 00:20:45.593 [2024-04-19 03:34:23.134205] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.593 [2024-04-19 03:34:23.134334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.593 [2024-04-19 03:34:23.134360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.593 [2024-04-19 03:34:23.134387] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.593 [2024-04-19 03:34:23.134403] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.593 [2024-04-19 03:34:23.134434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.593 qpair failed and we were unable to recover it. 
00:20:45.593 [2024-04-19 03:34:23.144232] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.593 [2024-04-19 03:34:23.144370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.593 [2024-04-19 03:34:23.144404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.593 [2024-04-19 03:34:23.144423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.593 [2024-04-19 03:34:23.144435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.593 [2024-04-19 03:34:23.144466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.593 qpair failed and we were unable to recover it. 00:20:45.853 [2024-04-19 03:34:23.154243] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.853 [2024-04-19 03:34:23.154407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.853 [2024-04-19 03:34:23.154435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.853 [2024-04-19 03:34:23.154449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.853 [2024-04-19 03:34:23.154462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.853 [2024-04-19 03:34:23.154492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.853 qpair failed and we were unable to recover it. 00:20:45.853 [2024-04-19 03:34:23.164256] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.853 [2024-04-19 03:34:23.164410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.853 [2024-04-19 03:34:23.164436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.853 [2024-04-19 03:34:23.164451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.853 [2024-04-19 03:34:23.164464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.853 [2024-04-19 03:34:23.164493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.853 qpair failed and we were unable to recover it. 
00:20:45.853 [2024-04-19 03:34:23.174279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.853 [2024-04-19 03:34:23.174413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.853 [2024-04-19 03:34:23.174440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.853 [2024-04-19 03:34:23.174455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.853 [2024-04-19 03:34:23.174469] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.853 [2024-04-19 03:34:23.174498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.853 qpair failed and we were unable to recover it. 00:20:45.853 [2024-04-19 03:34:23.184418] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.853 [2024-04-19 03:34:23.184566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.853 [2024-04-19 03:34:23.184591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.853 [2024-04-19 03:34:23.184606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.853 [2024-04-19 03:34:23.184619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.853 [2024-04-19 03:34:23.184649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.853 qpair failed and we were unable to recover it. 00:20:45.853 [2024-04-19 03:34:23.194327] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.853 [2024-04-19 03:34:23.194459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.853 [2024-04-19 03:34:23.194485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.853 [2024-04-19 03:34:23.194500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.853 [2024-04-19 03:34:23.194513] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:45.854 [2024-04-19 03:34:23.194542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:45.854 qpair failed and we were unable to recover it. 
00:20:45.854 [2024-04-19 03:34:23.204437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.854 [2024-04-19 03:34:23.204569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.854 [2024-04-19 03:34:23.204594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.854 [2024-04-19 03:34:23.204609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.854 [2024-04-19 03:34:23.204622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.854 [2024-04-19 03:34:23.204652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.854 qpair failed and we were unable to recover it.
00:20:45.854 [2024-04-19 03:34:23.214407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.854 [2024-04-19 03:34:23.214557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.854 [2024-04-19 03:34:23.214583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.854 [2024-04-19 03:34:23.214597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.854 [2024-04-19 03:34:23.214610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.854 [2024-04-19 03:34:23.214639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.854 qpair failed and we were unable to recover it.
00:20:45.854 [2024-04-19 03:34:23.224452] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.854 [2024-04-19 03:34:23.224601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.854 [2024-04-19 03:34:23.224635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.854 [2024-04-19 03:34:23.224652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.854 [2024-04-19 03:34:23.224665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.854 [2024-04-19 03:34:23.224695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.854 qpair failed and we were unable to recover it.
00:20:45.854 [2024-04-19 03:34:23.234484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.854 [2024-04-19 03:34:23.234633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.854 [2024-04-19 03:34:23.234659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.854 [2024-04-19 03:34:23.234674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.854 [2024-04-19 03:34:23.234686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.854 [2024-04-19 03:34:23.234742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.854 qpair failed and we were unable to recover it.
00:20:45.854 [2024-04-19 03:34:23.244474] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.854 [2024-04-19 03:34:23.244601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.854 [2024-04-19 03:34:23.244626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.854 [2024-04-19 03:34:23.244640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.854 [2024-04-19 03:34:23.244653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.854 [2024-04-19 03:34:23.244683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.854 qpair failed and we were unable to recover it.
00:20:45.854 [2024-04-19 03:34:23.254515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.854 [2024-04-19 03:34:23.254647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.854 [2024-04-19 03:34:23.254673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.854 [2024-04-19 03:34:23.254688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.854 [2024-04-19 03:34:23.254701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.854 [2024-04-19 03:34:23.254731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.854 qpair failed and we were unable to recover it.
00:20:45.854 [2024-04-19 03:34:23.264580] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.854 [2024-04-19 03:34:23.264753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.854 [2024-04-19 03:34:23.264779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.854 [2024-04-19 03:34:23.264808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.854 [2024-04-19 03:34:23.264821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.854 [2024-04-19 03:34:23.264872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.854 qpair failed and we were unable to recover it.
00:20:45.854 [2024-04-19 03:34:23.274614] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.854 [2024-04-19 03:34:23.274747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.854 [2024-04-19 03:34:23.274772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.854 [2024-04-19 03:34:23.274787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.854 [2024-04-19 03:34:23.274800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.854 [2024-04-19 03:34:23.274829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.854 qpair failed and we were unable to recover it.
00:20:45.854 [2024-04-19 03:34:23.284596] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.854 [2024-04-19 03:34:23.284724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.854 [2024-04-19 03:34:23.284750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.854 [2024-04-19 03:34:23.284764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.854 [2024-04-19 03:34:23.284777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.854 [2024-04-19 03:34:23.284806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.854 qpair failed and we were unable to recover it.
00:20:45.854 [2024-04-19 03:34:23.294655] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.854 [2024-04-19 03:34:23.294791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.854 [2024-04-19 03:34:23.294816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.854 [2024-04-19 03:34:23.294831] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.854 [2024-04-19 03:34:23.294844] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.854 [2024-04-19 03:34:23.294873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.854 qpair failed and we were unable to recover it.
00:20:45.854 [2024-04-19 03:34:23.304665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.854 [2024-04-19 03:34:23.304805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.854 [2024-04-19 03:34:23.304829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.854 [2024-04-19 03:34:23.304844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.854 [2024-04-19 03:34:23.304857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.854 [2024-04-19 03:34:23.304886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.854 qpair failed and we were unable to recover it.
00:20:45.854 [2024-04-19 03:34:23.314717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.854 [2024-04-19 03:34:23.314848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.854 [2024-04-19 03:34:23.314879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.854 [2024-04-19 03:34:23.314895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.854 [2024-04-19 03:34:23.314908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.854 [2024-04-19 03:34:23.314950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.854 qpair failed and we were unable to recover it.
00:20:45.854 [2024-04-19 03:34:23.324839] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.854 [2024-04-19 03:34:23.325001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.854 [2024-04-19 03:34:23.325042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.854 [2024-04-19 03:34:23.325057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.854 [2024-04-19 03:34:23.325069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.854 [2024-04-19 03:34:23.325099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.854 qpair failed and we were unable to recover it.
00:20:45.854 [2024-04-19 03:34:23.334734] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.855 [2024-04-19 03:34:23.334868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.855 [2024-04-19 03:34:23.334893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.855 [2024-04-19 03:34:23.334909] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.855 [2024-04-19 03:34:23.334922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.855 [2024-04-19 03:34:23.334951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.855 qpair failed and we were unable to recover it.
00:20:45.855 [2024-04-19 03:34:23.344793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.855 [2024-04-19 03:34:23.344928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.855 [2024-04-19 03:34:23.344953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.855 [2024-04-19 03:34:23.344969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.855 [2024-04-19 03:34:23.344981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.855 [2024-04-19 03:34:23.345011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.855 qpair failed and we were unable to recover it.
00:20:45.855 [2024-04-19 03:34:23.354782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.855 [2024-04-19 03:34:23.354920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.855 [2024-04-19 03:34:23.354945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.855 [2024-04-19 03:34:23.354960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.855 [2024-04-19 03:34:23.354972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.855 [2024-04-19 03:34:23.355007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.855 qpair failed and we were unable to recover it.
00:20:45.855 [2024-04-19 03:34:23.364864] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.855 [2024-04-19 03:34:23.365008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.855 [2024-04-19 03:34:23.365034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.855 [2024-04-19 03:34:23.365049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.855 [2024-04-19 03:34:23.365065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.855 [2024-04-19 03:34:23.365109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.855 qpair failed and we were unable to recover it.
00:20:45.855 [2024-04-19 03:34:23.374832] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.855 [2024-04-19 03:34:23.374966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.855 [2024-04-19 03:34:23.374992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.855 [2024-04-19 03:34:23.375007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.855 [2024-04-19 03:34:23.375020] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.855 [2024-04-19 03:34:23.375049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.855 qpair failed and we were unable to recover it.
00:20:45.855 [2024-04-19 03:34:23.384884] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.855 [2024-04-19 03:34:23.385022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.855 [2024-04-19 03:34:23.385048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.855 [2024-04-19 03:34:23.385063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.855 [2024-04-19 03:34:23.385076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.855 [2024-04-19 03:34:23.385105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.855 qpair failed and we were unable to recover it.
00:20:45.855 [2024-04-19 03:34:23.394924] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.855 [2024-04-19 03:34:23.395062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.855 [2024-04-19 03:34:23.395088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.855 [2024-04-19 03:34:23.395103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.855 [2024-04-19 03:34:23.395116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.855 [2024-04-19 03:34:23.395160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.855 qpair failed and we were unable to recover it.
00:20:45.855 [2024-04-19 03:34:23.404972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.855 [2024-04-19 03:34:23.405149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.855 [2024-04-19 03:34:23.405175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.855 [2024-04-19 03:34:23.405189] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.855 [2024-04-19 03:34:23.405218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:45.855 [2024-04-19 03:34:23.405248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:45.855 qpair failed and we were unable to recover it.
00:20:46.114 [2024-04-19 03:34:23.415013] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.114 [2024-04-19 03:34:23.415156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.114 [2024-04-19 03:34:23.415182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.114 [2024-04-19 03:34:23.415197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.114 [2024-04-19 03:34:23.415210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.114 [2024-04-19 03:34:23.415239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.114 qpair failed and we were unable to recover it.
00:20:46.114 [2024-04-19 03:34:23.425006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.114 [2024-04-19 03:34:23.425142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.114 [2024-04-19 03:34:23.425167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.114 [2024-04-19 03:34:23.425182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.114 [2024-04-19 03:34:23.425195] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.114 [2024-04-19 03:34:23.425225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.114 qpair failed and we were unable to recover it.
00:20:46.114 [2024-04-19 03:34:23.435034] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.114 [2024-04-19 03:34:23.435184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.114 [2024-04-19 03:34:23.435211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.114 [2024-04-19 03:34:23.435226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.114 [2024-04-19 03:34:23.435239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.114 [2024-04-19 03:34:23.435280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.114 qpair failed and we were unable to recover it.
00:20:46.114 [2024-04-19 03:34:23.445183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.114 [2024-04-19 03:34:23.445325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.114 [2024-04-19 03:34:23.445350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.114 [2024-04-19 03:34:23.445365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.114 [2024-04-19 03:34:23.445391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.114 [2024-04-19 03:34:23.445424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.114 qpair failed and we were unable to recover it.
00:20:46.114 [2024-04-19 03:34:23.455086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.114 [2024-04-19 03:34:23.455249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.114 [2024-04-19 03:34:23.455275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.114 [2024-04-19 03:34:23.455290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.114 [2024-04-19 03:34:23.455303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.114 [2024-04-19 03:34:23.455333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.114 qpair failed and we were unable to recover it.
00:20:46.114 [2024-04-19 03:34:23.465136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.114 [2024-04-19 03:34:23.465307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.114 [2024-04-19 03:34:23.465334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.114 [2024-04-19 03:34:23.465349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.114 [2024-04-19 03:34:23.465361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.114 [2024-04-19 03:34:23.465398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.114 qpair failed and we were unable to recover it.
00:20:46.114 [2024-04-19 03:34:23.475143] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.114 [2024-04-19 03:34:23.475278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.114 [2024-04-19 03:34:23.475304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.114 [2024-04-19 03:34:23.475320] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.114 [2024-04-19 03:34:23.475334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.114 [2024-04-19 03:34:23.475363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.114 qpair failed and we were unable to recover it.
00:20:46.114 [2024-04-19 03:34:23.485152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.114 [2024-04-19 03:34:23.485283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.114 [2024-04-19 03:34:23.485310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.485327] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.485339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.115 [2024-04-19 03:34:23.485369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.115 qpair failed and we were unable to recover it.
00:20:46.115 [2024-04-19 03:34:23.495166] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.115 [2024-04-19 03:34:23.495300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.115 [2024-04-19 03:34:23.495327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.495342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.495355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.115 [2024-04-19 03:34:23.495391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.115 qpair failed and we were unable to recover it.
00:20:46.115 [2024-04-19 03:34:23.505249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.115 [2024-04-19 03:34:23.505390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.115 [2024-04-19 03:34:23.505417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.505433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.505446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.115 [2024-04-19 03:34:23.505476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.115 qpair failed and we were unable to recover it.
00:20:46.115 [2024-04-19 03:34:23.515267] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.115 [2024-04-19 03:34:23.515448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.115 [2024-04-19 03:34:23.515476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.515495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.515509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.115 [2024-04-19 03:34:23.515540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.115 qpair failed and we were unable to recover it.
00:20:46.115 [2024-04-19 03:34:23.525264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.115 [2024-04-19 03:34:23.525401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.115 [2024-04-19 03:34:23.525428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.525444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.525456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.115 [2024-04-19 03:34:23.525487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.115 qpair failed and we were unable to recover it.
00:20:46.115 [2024-04-19 03:34:23.535311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.115 [2024-04-19 03:34:23.535449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.115 [2024-04-19 03:34:23.535475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.535495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.535509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.115 [2024-04-19 03:34:23.535539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.115 qpair failed and we were unable to recover it.
00:20:46.115 [2024-04-19 03:34:23.545333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.115 [2024-04-19 03:34:23.545479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.115 [2024-04-19 03:34:23.545510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.545526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.545538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.115 [2024-04-19 03:34:23.545569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.115 qpair failed and we were unable to recover it.
00:20:46.115 [2024-04-19 03:34:23.555371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.115 [2024-04-19 03:34:23.555507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.115 [2024-04-19 03:34:23.555534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.555550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.555562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.115 [2024-04-19 03:34:23.555592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.115 qpair failed and we were unable to recover it.
00:20:46.115 [2024-04-19 03:34:23.565399] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.115 [2024-04-19 03:34:23.565539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.115 [2024-04-19 03:34:23.565566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.565581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.565594] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.115 [2024-04-19 03:34:23.565624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.115 qpair failed and we were unable to recover it.
00:20:46.115 [2024-04-19 03:34:23.575425] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.115 [2024-04-19 03:34:23.575594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.115 [2024-04-19 03:34:23.575622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.575637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.575650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.115 [2024-04-19 03:34:23.575680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.115 qpair failed and we were unable to recover it.
00:20:46.115 [2024-04-19 03:34:23.585466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.115 [2024-04-19 03:34:23.585599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.115 [2024-04-19 03:34:23.585627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.585642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.585655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.115 [2024-04-19 03:34:23.585697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.115 qpair failed and we were unable to recover it.
00:20:46.115 [2024-04-19 03:34:23.595463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.115 [2024-04-19 03:34:23.595596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.115 [2024-04-19 03:34:23.595623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.595639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.595652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.115 [2024-04-19 03:34:23.595681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.115 qpair failed and we were unable to recover it.
00:20:46.115 [2024-04-19 03:34:23.605564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.115 [2024-04-19 03:34:23.605700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.115 [2024-04-19 03:34:23.605726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.605741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.605754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.115 [2024-04-19 03:34:23.605798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.115 qpair failed and we were unable to recover it.
00:20:46.115 [2024-04-19 03:34:23.615526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.115 [2024-04-19 03:34:23.615657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.115 [2024-04-19 03:34:23.615695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.115 [2024-04-19 03:34:23.615711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.115 [2024-04-19 03:34:23.615724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.116 [2024-04-19 03:34:23.615754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.116 qpair failed and we were unable to recover it.
00:20:46.116 [2024-04-19 03:34:23.625609] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.116 [2024-04-19 03:34:23.625758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.116 [2024-04-19 03:34:23.625784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.116 [2024-04-19 03:34:23.625804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.116 [2024-04-19 03:34:23.625817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.116 [2024-04-19 03:34:23.625862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.116 qpair failed and we were unable to recover it.
00:20:46.116 [2024-04-19 03:34:23.635627] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.116 [2024-04-19 03:34:23.635787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.116 [2024-04-19 03:34:23.635813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.116 [2024-04-19 03:34:23.635827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.116 [2024-04-19 03:34:23.635840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.116 [2024-04-19 03:34:23.635870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.116 qpair failed and we were unable to recover it.
00:20:46.116 [2024-04-19 03:34:23.645635] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.116 [2024-04-19 03:34:23.645763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.116 [2024-04-19 03:34:23.645789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.116 [2024-04-19 03:34:23.645804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.116 [2024-04-19 03:34:23.645817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.116 [2024-04-19 03:34:23.645846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.116 qpair failed and we were unable to recover it.
00:20:46.116 [2024-04-19 03:34:23.655679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.116 [2024-04-19 03:34:23.655831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.116 [2024-04-19 03:34:23.655862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.116 [2024-04-19 03:34:23.655879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.116 [2024-04-19 03:34:23.655892] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90
00:20:46.116 [2024-04-19 03:34:23.655925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:20:46.116 qpair failed and we were unable to recover it.
00:20:46.116 [2024-04-19 03:34:23.665706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.116 [2024-04-19 03:34:23.665836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.116 [2024-04-19 03:34:23.665868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.116 [2024-04-19 03:34:23.665884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.116 [2024-04-19 03:34:23.665897] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.116 [2024-04-19 03:34:23.665928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.116 qpair failed and we were unable to recover it.
00:20:46.375 [2024-04-19 03:34:23.675770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.375 [2024-04-19 03:34:23.675923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.375 [2024-04-19 03:34:23.675951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.375 [2024-04-19 03:34:23.675966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.375 [2024-04-19 03:34:23.675980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.375 [2024-04-19 03:34:23.676023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.375 qpair failed and we were unable to recover it.
00:20:46.375 [2024-04-19 03:34:23.685764] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.375 [2024-04-19 03:34:23.685920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.375 [2024-04-19 03:34:23.685950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.375 [2024-04-19 03:34:23.685965] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.375 [2024-04-19 03:34:23.685978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.375 [2024-04-19 03:34:23.686010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.375 qpair failed and we were unable to recover it.
00:20:46.375 [2024-04-19 03:34:23.695767] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.375 [2024-04-19 03:34:23.695910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.375 [2024-04-19 03:34:23.695937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.375 [2024-04-19 03:34:23.695951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.375 [2024-04-19 03:34:23.695964] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.375 [2024-04-19 03:34:23.695994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.375 qpair failed and we were unable to recover it.
00:20:46.375 [2024-04-19 03:34:23.705805] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.375 [2024-04-19 03:34:23.705941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.375 [2024-04-19 03:34:23.705967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.375 [2024-04-19 03:34:23.705982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.375 [2024-04-19 03:34:23.705994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.375 [2024-04-19 03:34:23.706024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.375 qpair failed and we were unable to recover it.
00:20:46.375 [2024-04-19 03:34:23.715817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.375 [2024-04-19 03:34:23.715947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.375 [2024-04-19 03:34:23.715978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.375 [2024-04-19 03:34:23.715994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.375 [2024-04-19 03:34:23.716006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.375 [2024-04-19 03:34:23.716036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.375 qpair failed and we were unable to recover it.
00:20:46.375 [2024-04-19 03:34:23.725900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.375 [2024-04-19 03:34:23.726039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.375 [2024-04-19 03:34:23.726066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.375 [2024-04-19 03:34:23.726082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.375 [2024-04-19 03:34:23.726096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.726126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.735886] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.376 [2024-04-19 03:34:23.736017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.376 [2024-04-19 03:34:23.736042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.376 [2024-04-19 03:34:23.736057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.376 [2024-04-19 03:34:23.736069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.736099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.745975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.376 [2024-04-19 03:34:23.746121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.376 [2024-04-19 03:34:23.746147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.376 [2024-04-19 03:34:23.746162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.376 [2024-04-19 03:34:23.746174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.746203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.755947] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.376 [2024-04-19 03:34:23.756080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.376 [2024-04-19 03:34:23.756106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.376 [2024-04-19 03:34:23.756121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.376 [2024-04-19 03:34:23.756134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.756181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.765967] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.376 [2024-04-19 03:34:23.766130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.376 [2024-04-19 03:34:23.766156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.376 [2024-04-19 03:34:23.766171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.376 [2024-04-19 03:34:23.766183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.766213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.775985] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.376 [2024-04-19 03:34:23.776135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.376 [2024-04-19 03:34:23.776161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.376 [2024-04-19 03:34:23.776175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.376 [2024-04-19 03:34:23.776187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.776216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.786000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.376 [2024-04-19 03:34:23.786140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.376 [2024-04-19 03:34:23.786166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.376 [2024-04-19 03:34:23.786181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.376 [2024-04-19 03:34:23.786193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.786223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.796048] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.376 [2024-04-19 03:34:23.796183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.376 [2024-04-19 03:34:23.796210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.376 [2024-04-19 03:34:23.796229] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.376 [2024-04-19 03:34:23.796242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.796273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.806091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.376 [2024-04-19 03:34:23.806225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.376 [2024-04-19 03:34:23.806257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.376 [2024-04-19 03:34:23.806273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.376 [2024-04-19 03:34:23.806286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.806331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.816067] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.376 [2024-04-19 03:34:23.816201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.376 [2024-04-19 03:34:23.816227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.376 [2024-04-19 03:34:23.816243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.376 [2024-04-19 03:34:23.816255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.816284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.826113] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.376 [2024-04-19 03:34:23.826246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.376 [2024-04-19 03:34:23.826271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.376 [2024-04-19 03:34:23.826286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.376 [2024-04-19 03:34:23.826299] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.826328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.836220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.376 [2024-04-19 03:34:23.836378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.376 [2024-04-19 03:34:23.836410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.376 [2024-04-19 03:34:23.836441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.376 [2024-04-19 03:34:23.836454] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.836484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.846183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.376 [2024-04-19 03:34:23.846357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.376 [2024-04-19 03:34:23.846390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.376 [2024-04-19 03:34:23.846407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.376 [2024-04-19 03:34:23.846426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.846456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.856228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.376 [2024-04-19 03:34:23.856358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.376 [2024-04-19 03:34:23.856390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.376 [2024-04-19 03:34:23.856407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.376 [2024-04-19 03:34:23.856420] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.376 [2024-04-19 03:34:23.856450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.376 qpair failed and we were unable to recover it.
00:20:46.376 [2024-04-19 03:34:23.866280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.376 [2024-04-19 03:34:23.866421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.377 [2024-04-19 03:34:23.866448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.377 [2024-04-19 03:34:23.866463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.377 [2024-04-19 03:34:23.866476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.377 [2024-04-19 03:34:23.866506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.377 qpair failed and we were unable to recover it. 00:20:46.377 [2024-04-19 03:34:23.876233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.377 [2024-04-19 03:34:23.876379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.377 [2024-04-19 03:34:23.876410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.377 [2024-04-19 03:34:23.876425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.377 [2024-04-19 03:34:23.876439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.377 [2024-04-19 03:34:23.876468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.377 qpair failed and we were unable to recover it. 00:20:46.377 [2024-04-19 03:34:23.886256] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.377 [2024-04-19 03:34:23.886388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.377 [2024-04-19 03:34:23.886414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.377 [2024-04-19 03:34:23.886429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.377 [2024-04-19 03:34:23.886442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.377 [2024-04-19 03:34:23.886471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.377 qpair failed and we were unable to recover it. 
00:20:46.377 [2024-04-19 03:34:23.896335] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.377 [2024-04-19 03:34:23.896491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.377 [2024-04-19 03:34:23.896518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.377 [2024-04-19 03:34:23.896533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.377 [2024-04-19 03:34:23.896546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.377 [2024-04-19 03:34:23.896576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.377 qpair failed and we were unable to recover it. 00:20:46.377 [2024-04-19 03:34:23.906331] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.377 [2024-04-19 03:34:23.906473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.377 [2024-04-19 03:34:23.906499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.377 [2024-04-19 03:34:23.906514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.377 [2024-04-19 03:34:23.906527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.377 [2024-04-19 03:34:23.906556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.377 qpair failed and we were unable to recover it. 00:20:46.377 [2024-04-19 03:34:23.916354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.377 [2024-04-19 03:34:23.916499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.377 [2024-04-19 03:34:23.916525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.377 [2024-04-19 03:34:23.916541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.377 [2024-04-19 03:34:23.916553] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.377 [2024-04-19 03:34:23.916583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.377 qpair failed and we were unable to recover it. 
00:20:46.377 [2024-04-19 03:34:23.926377] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.377 [2024-04-19 03:34:23.926511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.377 [2024-04-19 03:34:23.926536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.377 [2024-04-19 03:34:23.926552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.377 [2024-04-19 03:34:23.926565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.377 [2024-04-19 03:34:23.926594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.377 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:23.936426] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:23.936597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:23.936623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:23.936643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:23.936657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:23.936688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:23.946472] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:23.946604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:23.946632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:23.946647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:23.946659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:23.946689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 
00:20:46.635 [2024-04-19 03:34:23.956516] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:23.956653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:23.956681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:23.956696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:23.956709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:23.956754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:23.966512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:23.966690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:23.966716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:23.966732] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:23.966745] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:23.966775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:23.976577] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:23.976708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:23.976736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:23.976751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:23.976763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:23.976792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 
00:20:46.635 [2024-04-19 03:34:23.986581] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:23.986743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:23.986770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:23.986800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:23.986812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:23.986841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:23.996637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:23.996812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:23.996839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:23.996868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:23.996880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:23.996909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:24.006621] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.006758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.006785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.006801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.006813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.006843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 
00:20:46.635 [2024-04-19 03:34:24.016671] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.016850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.016876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.016892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.016904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.016933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:24.026677] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.026811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.026837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.026858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.026873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.026902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:24.036726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.036876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.036903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.036919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.036932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.036976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 
00:20:46.635 [2024-04-19 03:34:24.046853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.047011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.047037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.047052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.047064] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.047107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:24.056754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.056890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.056916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.056931] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.056944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.056973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:24.066795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.066933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.066959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.066974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.066987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.067016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 
00:20:46.635 [2024-04-19 03:34:24.076855] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.076996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.077033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.077049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.077061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.077090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:24.086859] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.086991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.087018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.087034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.087047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.087076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:24.096890] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.097020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.097046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.097062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.097074] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.097103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 
00:20:46.635 [2024-04-19 03:34:24.106937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.107083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.107109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.107125] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.107138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.107168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:24.116937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.117081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.117113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.117129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.117141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.117185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:24.126959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.127141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.127167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.127182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.127195] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.127224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 
00:20:46.635 [2024-04-19 03:34:24.137006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.137139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.137165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.137180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.137193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.137222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:24.147027] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.147167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.147194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.147209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.147222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.147251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 00:20:46.635 [2024-04-19 03:34:24.157053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.635 [2024-04-19 03:34:24.157228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.635 [2024-04-19 03:34:24.157254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.635 [2024-04-19 03:34:24.157269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.635 [2024-04-19 03:34:24.157282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.635 [2024-04-19 03:34:24.157317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.635 qpair failed and we were unable to recover it. 
00:20:46.636 [2024-04-19 03:34:24.167073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.636 [2024-04-19 03:34:24.167248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.636 [2024-04-19 03:34:24.167275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.636 [2024-04-19 03:34:24.167290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.636 [2024-04-19 03:34:24.167317] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.636 [2024-04-19 03:34:24.167346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.636 qpair failed and we were unable to recover it. 00:20:46.636 [2024-04-19 03:34:24.177118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.636 [2024-04-19 03:34:24.177296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.636 [2024-04-19 03:34:24.177323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.636 [2024-04-19 03:34:24.177338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.636 [2024-04-19 03:34:24.177350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.636 [2024-04-19 03:34:24.177379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.636 qpair failed and we were unable to recover it. 00:20:46.636 [2024-04-19 03:34:24.187138] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.636 [2024-04-19 03:34:24.187276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.636 [2024-04-19 03:34:24.187305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.636 [2024-04-19 03:34:24.187321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.636 [2024-04-19 03:34:24.187334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.636 [2024-04-19 03:34:24.187365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.636 qpair failed and we were unable to recover it. 
00:20:46.894 [2024-04-19 03:34:24.197157] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.894 [2024-04-19 03:34:24.197298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.894 [2024-04-19 03:34:24.197326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.894 [2024-04-19 03:34:24.197341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.894 [2024-04-19 03:34:24.197354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.894 [2024-04-19 03:34:24.197405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.894 qpair failed and we were unable to recover it. 00:20:46.894 [2024-04-19 03:34:24.207203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.894 [2024-04-19 03:34:24.207344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.894 [2024-04-19 03:34:24.207377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.894 [2024-04-19 03:34:24.207402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.894 [2024-04-19 03:34:24.207415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.894 [2024-04-19 03:34:24.207446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.894 qpair failed and we were unable to recover it. 00:20:46.894 [2024-04-19 03:34:24.217224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.894 [2024-04-19 03:34:24.217366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.894 [2024-04-19 03:34:24.217401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.894 [2024-04-19 03:34:24.217418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.894 [2024-04-19 03:34:24.217431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.894 [2024-04-19 03:34:24.217460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.894 qpair failed and we were unable to recover it. 
00:20:46.894 [2024-04-19 03:34:24.227247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.894 [2024-04-19 03:34:24.227396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.894 [2024-04-19 03:34:24.227424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.894 [2024-04-19 03:34:24.227439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.894 [2024-04-19 03:34:24.227452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.894 [2024-04-19 03:34:24.227481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 00:20:46.895 [2024-04-19 03:34:24.237296] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.895 [2024-04-19 03:34:24.237472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.895 [2024-04-19 03:34:24.237498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.895 [2024-04-19 03:34:24.237514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.895 [2024-04-19 03:34:24.237526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.895 [2024-04-19 03:34:24.237555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 00:20:46.895 [2024-04-19 03:34:24.247291] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.895 [2024-04-19 03:34:24.247460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.895 [2024-04-19 03:34:24.247488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.895 [2024-04-19 03:34:24.247503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.895 [2024-04-19 03:34:24.247521] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.895 [2024-04-19 03:34:24.247552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 
00:20:46.895 [2024-04-19 03:34:24.257341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.895 [2024-04-19 03:34:24.257489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.895 [2024-04-19 03:34:24.257517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.895 [2024-04-19 03:34:24.257532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.895 [2024-04-19 03:34:24.257544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.895 [2024-04-19 03:34:24.257574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 00:20:46.895 [2024-04-19 03:34:24.267411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.895 [2024-04-19 03:34:24.267587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.895 [2024-04-19 03:34:24.267614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.895 [2024-04-19 03:34:24.267629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.895 [2024-04-19 03:34:24.267642] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.895 [2024-04-19 03:34:24.267686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 00:20:46.895 [2024-04-19 03:34:24.277411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.895 [2024-04-19 03:34:24.277594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.895 [2024-04-19 03:34:24.277620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.895 [2024-04-19 03:34:24.277636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.895 [2024-04-19 03:34:24.277648] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.895 [2024-04-19 03:34:24.277678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 
00:20:46.895 [2024-04-19 03:34:24.287419] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.895 [2024-04-19 03:34:24.287548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.895 [2024-04-19 03:34:24.287574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.895 [2024-04-19 03:34:24.287589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.895 [2024-04-19 03:34:24.287602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.895 [2024-04-19 03:34:24.287632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 00:20:46.895 [2024-04-19 03:34:24.297554] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.895 [2024-04-19 03:34:24.297698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.895 [2024-04-19 03:34:24.297724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.895 [2024-04-19 03:34:24.297739] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.895 [2024-04-19 03:34:24.297752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.895 [2024-04-19 03:34:24.297781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 00:20:46.895 [2024-04-19 03:34:24.307483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.895 [2024-04-19 03:34:24.307622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.895 [2024-04-19 03:34:24.307648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.895 [2024-04-19 03:34:24.307663] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.895 [2024-04-19 03:34:24.307675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.895 [2024-04-19 03:34:24.307705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 
00:20:46.895 [2024-04-19 03:34:24.317545] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.895 [2024-04-19 03:34:24.317713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.895 [2024-04-19 03:34:24.317740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.895 [2024-04-19 03:34:24.317755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.895 [2024-04-19 03:34:24.317783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.895 [2024-04-19 03:34:24.317812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 00:20:46.895 [2024-04-19 03:34:24.327515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.895 [2024-04-19 03:34:24.327646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.895 [2024-04-19 03:34:24.327673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.895 [2024-04-19 03:34:24.327687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.895 [2024-04-19 03:34:24.327700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.895 [2024-04-19 03:34:24.327729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 00:20:46.895 [2024-04-19 03:34:24.337642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.895 [2024-04-19 03:34:24.337792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.895 [2024-04-19 03:34:24.337819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.895 [2024-04-19 03:34:24.337833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.895 [2024-04-19 03:34:24.337851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.895 [2024-04-19 03:34:24.337881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 
00:20:46.895 [2024-04-19 03:34:24.347610] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.895 [2024-04-19 03:34:24.347747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.895 [2024-04-19 03:34:24.347773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.895 [2024-04-19 03:34:24.347788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.895 [2024-04-19 03:34:24.347800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.895 [2024-04-19 03:34:24.347830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 00:20:46.895 [2024-04-19 03:34:24.357652] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.895 [2024-04-19 03:34:24.357813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.895 [2024-04-19 03:34:24.357839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.895 [2024-04-19 03:34:24.357854] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.895 [2024-04-19 03:34:24.357867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.895 [2024-04-19 03:34:24.357896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.895 qpair failed and we were unable to recover it. 00:20:46.895 [2024-04-19 03:34:24.367662] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.896 [2024-04-19 03:34:24.367809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.896 [2024-04-19 03:34:24.367835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.896 [2024-04-19 03:34:24.367850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.896 [2024-04-19 03:34:24.367863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.896 [2024-04-19 03:34:24.367907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.896 qpair failed and we were unable to recover it. 
00:20:46.896 [2024-04-19 03:34:24.377666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.896 [2024-04-19 03:34:24.377807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.896 [2024-04-19 03:34:24.377834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.896 [2024-04-19 03:34:24.377849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.896 [2024-04-19 03:34:24.377862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.896 [2024-04-19 03:34:24.377891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.896 qpair failed and we were unable to recover it. 00:20:46.896 [2024-04-19 03:34:24.387788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.896 [2024-04-19 03:34:24.387931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.896 [2024-04-19 03:34:24.387957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.896 [2024-04-19 03:34:24.387972] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.896 [2024-04-19 03:34:24.387984] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.896 [2024-04-19 03:34:24.388014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.896 qpair failed and we were unable to recover it. 00:20:46.896 [2024-04-19 03:34:24.397706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.896 [2024-04-19 03:34:24.397840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.896 [2024-04-19 03:34:24.397867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.896 [2024-04-19 03:34:24.397882] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.896 [2024-04-19 03:34:24.397894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:46.896 [2024-04-19 03:34:24.397923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.896 qpair failed and we were unable to recover it. 
00:20:46.896 [2024-04-19 03:34:24.407821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.896 [2024-04-19 03:34:24.408010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.896 [2024-04-19 03:34:24.408035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.896 [2024-04-19 03:34:24.408050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.896 [2024-04-19 03:34:24.408062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.896 [2024-04-19 03:34:24.408105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.896 qpair failed and we were unable to recover it.
00:20:46.896 [2024-04-19 03:34:24.417765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.896 [2024-04-19 03:34:24.417916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.896 [2024-04-19 03:34:24.417942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.896 [2024-04-19 03:34:24.417957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.896 [2024-04-19 03:34:24.417969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.896 [2024-04-19 03:34:24.417999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.896 qpair failed and we were unable to recover it.
00:20:46.896 [2024-04-19 03:34:24.427846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.896 [2024-04-19 03:34:24.427981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.896 [2024-04-19 03:34:24.428007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.896 [2024-04-19 03:34:24.428028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.896 [2024-04-19 03:34:24.428042] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.896 [2024-04-19 03:34:24.428072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.896 qpair failed and we were unable to recover it.
00:20:46.896 [2024-04-19 03:34:24.437863] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.896 [2024-04-19 03:34:24.438008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.896 [2024-04-19 03:34:24.438037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.896 [2024-04-19 03:34:24.438053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.896 [2024-04-19 03:34:24.438066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.896 [2024-04-19 03:34:24.438096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.896 qpair failed and we were unable to recover it.
00:20:46.896 [2024-04-19 03:34:24.447930] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:46.896 [2024-04-19 03:34:24.448108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:46.896 [2024-04-19 03:34:24.448135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:46.896 [2024-04-19 03:34:24.448150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:46.896 [2024-04-19 03:34:24.448162] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:46.896 [2024-04-19 03:34:24.448192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.896 qpair failed and we were unable to recover it.
00:20:47.155 [2024-04-19 03:34:24.457942] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.155 [2024-04-19 03:34:24.458078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.155 [2024-04-19 03:34:24.458104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.155 [2024-04-19 03:34:24.458119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.155 [2024-04-19 03:34:24.458132] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.155 [2024-04-19 03:34:24.458162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.155 qpair failed and we were unable to recover it.
00:20:47.155 [2024-04-19 03:34:24.467955] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.155 [2024-04-19 03:34:24.468120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.155 [2024-04-19 03:34:24.468146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.155 [2024-04-19 03:34:24.468161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.155 [2024-04-19 03:34:24.468174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.155 [2024-04-19 03:34:24.468203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.155 qpair failed and we were unable to recover it.
00:20:47.155 [2024-04-19 03:34:24.478000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.155 [2024-04-19 03:34:24.478139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.155 [2024-04-19 03:34:24.478164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.155 [2024-04-19 03:34:24.478179] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.155 [2024-04-19 03:34:24.478192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.155 [2024-04-19 03:34:24.478221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.155 qpair failed and we were unable to recover it.
00:20:47.155 [2024-04-19 03:34:24.487995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.155 [2024-04-19 03:34:24.488133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.155 [2024-04-19 03:34:24.488159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.155 [2024-04-19 03:34:24.488174] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.155 [2024-04-19 03:34:24.488186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.155 [2024-04-19 03:34:24.488231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.155 qpair failed and we were unable to recover it.
00:20:47.155 [2024-04-19 03:34:24.498051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.155 [2024-04-19 03:34:24.498189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.155 [2024-04-19 03:34:24.498216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.155 [2024-04-19 03:34:24.498232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.155 [2024-04-19 03:34:24.498244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.155 [2024-04-19 03:34:24.498273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.155 qpair failed and we were unable to recover it.
00:20:47.155 [2024-04-19 03:34:24.508047] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.155 [2024-04-19 03:34:24.508217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.155 [2024-04-19 03:34:24.508245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.155 [2024-04-19 03:34:24.508279] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.155 [2024-04-19 03:34:24.508293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.155 [2024-04-19 03:34:24.508322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.155 qpair failed and we were unable to recover it.
00:20:47.155 [2024-04-19 03:34:24.518067] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.155 [2024-04-19 03:34:24.518218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.155 [2024-04-19 03:34:24.518250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.155 [2024-04-19 03:34:24.518266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.155 [2024-04-19 03:34:24.518279] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.155 [2024-04-19 03:34:24.518324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.155 qpair failed and we were unable to recover it.
00:20:47.155 [2024-04-19 03:34:24.528082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.155 [2024-04-19 03:34:24.528239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.155 [2024-04-19 03:34:24.528266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.528281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.528293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.156 [2024-04-19 03:34:24.528322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.156 qpair failed and we were unable to recover it.
00:20:47.156 [2024-04-19 03:34:24.538102] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.156 [2024-04-19 03:34:24.538246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.156 [2024-04-19 03:34:24.538273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.538288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.538300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.156 [2024-04-19 03:34:24.538330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.156 qpair failed and we were unable to recover it.
00:20:47.156 [2024-04-19 03:34:24.548173] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.156 [2024-04-19 03:34:24.548316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.156 [2024-04-19 03:34:24.548342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.548357] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.548370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.156 [2024-04-19 03:34:24.548406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.156 qpair failed and we were unable to recover it.
00:20:47.156 [2024-04-19 03:34:24.558175] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.156 [2024-04-19 03:34:24.558317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.156 [2024-04-19 03:34:24.558343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.558358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.558371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.156 [2024-04-19 03:34:24.558415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.156 qpair failed and we were unable to recover it.
00:20:47.156 [2024-04-19 03:34:24.568191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.156 [2024-04-19 03:34:24.568328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.156 [2024-04-19 03:34:24.568354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.568369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.568388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.156 [2024-04-19 03:34:24.568420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.156 qpair failed and we were unable to recover it.
00:20:47.156 [2024-04-19 03:34:24.578230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.156 [2024-04-19 03:34:24.578366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.156 [2024-04-19 03:34:24.578399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.578415] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.578428] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.156 [2024-04-19 03:34:24.578457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.156 qpair failed and we were unable to recover it.
00:20:47.156 [2024-04-19 03:34:24.588298] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.156 [2024-04-19 03:34:24.588460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.156 [2024-04-19 03:34:24.588487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.588502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.588514] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.156 [2024-04-19 03:34:24.588544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.156 qpair failed and we were unable to recover it.
00:20:47.156 [2024-04-19 03:34:24.598269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.156 [2024-04-19 03:34:24.598444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.156 [2024-04-19 03:34:24.598471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.598487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.598499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.156 [2024-04-19 03:34:24.598528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.156 qpair failed and we were unable to recover it.
00:20:47.156 [2024-04-19 03:34:24.608342] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.156 [2024-04-19 03:34:24.608517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.156 [2024-04-19 03:34:24.608551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.608571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.608584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.156 [2024-04-19 03:34:24.608615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.156 qpair failed and we were unable to recover it.
00:20:47.156 [2024-04-19 03:34:24.618390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.156 [2024-04-19 03:34:24.618522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.156 [2024-04-19 03:34:24.618549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.618565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.618579] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.156 [2024-04-19 03:34:24.618610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.156 qpair failed and we were unable to recover it.
00:20:47.156 [2024-04-19 03:34:24.628374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.156 [2024-04-19 03:34:24.628534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.156 [2024-04-19 03:34:24.628564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.628580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.628593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.156 [2024-04-19 03:34:24.628622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.156 qpair failed and we were unable to recover it.
00:20:47.156 [2024-04-19 03:34:24.638396] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.156 [2024-04-19 03:34:24.638533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.156 [2024-04-19 03:34:24.638560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.638575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.638588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.156 [2024-04-19 03:34:24.638617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.156 qpair failed and we were unable to recover it.
00:20:47.156 [2024-04-19 03:34:24.648529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.156 [2024-04-19 03:34:24.648704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.156 [2024-04-19 03:34:24.648732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.648766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.648785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.156 [2024-04-19 03:34:24.648816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.156 qpair failed and we were unable to recover it.
00:20:47.156 [2024-04-19 03:34:24.658467] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.156 [2024-04-19 03:34:24.658611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.156 [2024-04-19 03:34:24.658638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.156 [2024-04-19 03:34:24.658653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.156 [2024-04-19 03:34:24.658665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.157 [2024-04-19 03:34:24.658695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.157 qpair failed and we were unable to recover it.
00:20:47.157 [2024-04-19 03:34:24.668517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.157 [2024-04-19 03:34:24.668649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.157 [2024-04-19 03:34:24.668676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.157 [2024-04-19 03:34:24.668696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.157 [2024-04-19 03:34:24.668710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.157 [2024-04-19 03:34:24.668740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.157 qpair failed and we were unable to recover it.
00:20:47.157 [2024-04-19 03:34:24.678512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.157 [2024-04-19 03:34:24.678654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.157 [2024-04-19 03:34:24.678689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.157 [2024-04-19 03:34:24.678705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.157 [2024-04-19 03:34:24.678718] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.157 [2024-04-19 03:34:24.678759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.157 qpair failed and we were unable to recover it.
00:20:47.157 [2024-04-19 03:34:24.688565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.157 [2024-04-19 03:34:24.688702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.157 [2024-04-19 03:34:24.688733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.157 [2024-04-19 03:34:24.688750] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.157 [2024-04-19 03:34:24.688762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.157 [2024-04-19 03:34:24.688809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.157 qpair failed and we were unable to recover it.
00:20:47.157 [2024-04-19 03:34:24.698579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.157 [2024-04-19 03:34:24.698714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.157 [2024-04-19 03:34:24.698742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.157 [2024-04-19 03:34:24.698757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.157 [2024-04-19 03:34:24.698769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.157 [2024-04-19 03:34:24.698800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.157 qpair failed and we were unable to recover it.
00:20:47.157 [2024-04-19 03:34:24.708590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.157 [2024-04-19 03:34:24.708725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.157 [2024-04-19 03:34:24.708753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.157 [2024-04-19 03:34:24.708768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.157 [2024-04-19 03:34:24.708781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.157 [2024-04-19 03:34:24.708810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.157 qpair failed and we were unable to recover it.
00:20:47.416 [2024-04-19 03:34:24.718627] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.416 [2024-04-19 03:34:24.718765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.416 [2024-04-19 03:34:24.718792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.416 [2024-04-19 03:34:24.718807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.416 [2024-04-19 03:34:24.718820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.416 [2024-04-19 03:34:24.718849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.416 qpair failed and we were unable to recover it.
00:20:47.416 [2024-04-19 03:34:24.728674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.416 [2024-04-19 03:34:24.728812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.416 [2024-04-19 03:34:24.728838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.416 [2024-04-19 03:34:24.728852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.416 [2024-04-19 03:34:24.728865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.416 [2024-04-19 03:34:24.728920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.416 qpair failed and we were unable to recover it.
00:20:47.416 [2024-04-19 03:34:24.738741] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.416 [2024-04-19 03:34:24.738921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.738948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.738963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.738982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.417 [2024-04-19 03:34:24.739013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.417 qpair failed and we were unable to recover it.
00:20:47.417 [2024-04-19 03:34:24.748753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.417 [2024-04-19 03:34:24.748924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.748951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.748966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.748978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.417 [2024-04-19 03:34:24.749009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.417 qpair failed and we were unable to recover it.
00:20:47.417 [2024-04-19 03:34:24.758734] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.417 [2024-04-19 03:34:24.758893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.758919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.758934] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.758946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.417 [2024-04-19 03:34:24.758976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.417 qpair failed and we were unable to recover it.
00:20:47.417 [2024-04-19 03:34:24.768776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.417 [2024-04-19 03:34:24.768912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.768937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.768952] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.768965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.417 [2024-04-19 03:34:24.768995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.417 qpair failed and we were unable to recover it.
00:20:47.417 [2024-04-19 03:34:24.778851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.417 [2024-04-19 03:34:24.779006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.779032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.779047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.779060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.417 [2024-04-19 03:34:24.779089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.417 qpair failed and we were unable to recover it.
00:20:47.417 [2024-04-19 03:34:24.788832] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.417 [2024-04-19 03:34:24.788965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.788993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.789009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.789022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.417 [2024-04-19 03:34:24.789051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.417 qpair failed and we were unable to recover it.
00:20:47.417 [2024-04-19 03:34:24.798842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.417 [2024-04-19 03:34:24.798980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.799006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.799022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.799034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.417 [2024-04-19 03:34:24.799064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.417 qpair failed and we were unable to recover it.
00:20:47.417 [2024-04-19 03:34:24.808896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.417 [2024-04-19 03:34:24.809033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.809059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.809074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.809088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.417 [2024-04-19 03:34:24.809128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.417 qpair failed and we were unable to recover it.
00:20:47.417 [2024-04-19 03:34:24.818906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.417 [2024-04-19 03:34:24.819047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.819072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.819087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.819099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.417 [2024-04-19 03:34:24.819128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.417 qpair failed and we were unable to recover it.
00:20:47.417 [2024-04-19 03:34:24.828974] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.417 [2024-04-19 03:34:24.829153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.829178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.829198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.829212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.417 [2024-04-19 03:34:24.829241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.417 qpair failed and we were unable to recover it.
00:20:47.417 [2024-04-19 03:34:24.838988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.417 [2024-04-19 03:34:24.839123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.839148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.839163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.839175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.417 [2024-04-19 03:34:24.839204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.417 qpair failed and we were unable to recover it.
00:20:47.417 [2024-04-19 03:34:24.849016] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.417 [2024-04-19 03:34:24.849189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.849214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.849228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.849256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.417 [2024-04-19 03:34:24.849286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.417 qpair failed and we were unable to recover it.
00:20:47.417 [2024-04-19 03:34:24.859055] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.417 [2024-04-19 03:34:24.859201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.859228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.859242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.859255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.417 [2024-04-19 03:34:24.859284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.417 qpair failed and we were unable to recover it.
00:20:47.417 [2024-04-19 03:34:24.869088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.417 [2024-04-19 03:34:24.869259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.417 [2024-04-19 03:34:24.869284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.417 [2024-04-19 03:34:24.869299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.417 [2024-04-19 03:34:24.869311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.418 [2024-04-19 03:34:24.869341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.418 qpair failed and we were unable to recover it.
00:20:47.418 [2024-04-19 03:34:24.879106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.418 [2024-04-19 03:34:24.879245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.418 [2024-04-19 03:34:24.879270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.418 [2024-04-19 03:34:24.879286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.418 [2024-04-19 03:34:24.879298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.418 [2024-04-19 03:34:24.879341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.418 qpair failed and we were unable to recover it.
00:20:47.418 [2024-04-19 03:34:24.889161] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.418 [2024-04-19 03:34:24.889294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.418 [2024-04-19 03:34:24.889320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.418 [2024-04-19 03:34:24.889335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.418 [2024-04-19 03:34:24.889348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.418 [2024-04-19 03:34:24.889377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.418 qpair failed and we were unable to recover it.
00:20:47.418 [2024-04-19 03:34:24.899141] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.418 [2024-04-19 03:34:24.899271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.418 [2024-04-19 03:34:24.899297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.418 [2024-04-19 03:34:24.899312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.418 [2024-04-19 03:34:24.899324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.418 [2024-04-19 03:34:24.899353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.418 qpair failed and we were unable to recover it.
00:20:47.418 [2024-04-19 03:34:24.909187] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.418 [2024-04-19 03:34:24.909318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.418 [2024-04-19 03:34:24.909342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.418 [2024-04-19 03:34:24.909357] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.418 [2024-04-19 03:34:24.909371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.418 [2024-04-19 03:34:24.909407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.418 qpair failed and we were unable to recover it.
00:20:47.418 [2024-04-19 03:34:24.919211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.418 [2024-04-19 03:34:24.919352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.418 [2024-04-19 03:34:24.919390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.418 [2024-04-19 03:34:24.919408] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.418 [2024-04-19 03:34:24.919421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.418 [2024-04-19 03:34:24.919463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.418 qpair failed and we were unable to recover it.
00:20:47.418 [2024-04-19 03:34:24.929249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.418 [2024-04-19 03:34:24.929440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.418 [2024-04-19 03:34:24.929471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.418 [2024-04-19 03:34:24.929499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.418 [2024-04-19 03:34:24.929522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.418 [2024-04-19 03:34:24.929564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.418 qpair failed and we were unable to recover it.
00:20:47.418 [2024-04-19 03:34:24.939301] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.418 [2024-04-19 03:34:24.939487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.418 [2024-04-19 03:34:24.939516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.418 [2024-04-19 03:34:24.939531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.418 [2024-04-19 03:34:24.939544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.418 [2024-04-19 03:34:24.939575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.418 qpair failed and we were unable to recover it.
00:20:47.418 [2024-04-19 03:34:24.949320] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.418 [2024-04-19 03:34:24.949495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.418 [2024-04-19 03:34:24.949523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.418 [2024-04-19 03:34:24.949538] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.418 [2024-04-19 03:34:24.949553] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.418 [2024-04-19 03:34:24.949584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.418 qpair failed and we were unable to recover it.
00:20:47.418 [2024-04-19 03:34:24.959361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.418 [2024-04-19 03:34:24.959514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.418 [2024-04-19 03:34:24.959540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.418 [2024-04-19 03:34:24.959556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.418 [2024-04-19 03:34:24.959568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.418 [2024-04-19 03:34:24.959604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.418 qpair failed and we were unable to recover it.
00:20:47.418 [2024-04-19 03:34:24.969369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.418 [2024-04-19 03:34:24.969509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.418 [2024-04-19 03:34:24.969534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.418 [2024-04-19 03:34:24.969548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.418 [2024-04-19 03:34:24.969561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90
00:20:47.418 [2024-04-19 03:34:24.969592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:47.418 qpair failed and we were unable to recover it.
00:20:47.677 [2024-04-19 03:34:24.979395] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.677 [2024-04-19 03:34:24.979531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.677 [2024-04-19 03:34:24.979557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.677 [2024-04-19 03:34:24.979572] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.677 [2024-04-19 03:34:24.979584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.677 [2024-04-19 03:34:24.979614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.677 qpair failed and we were unable to recover it. 00:20:47.677 [2024-04-19 03:34:24.989466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.677 [2024-04-19 03:34:24.989623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.677 [2024-04-19 03:34:24.989649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.677 [2024-04-19 03:34:24.989664] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.677 [2024-04-19 03:34:24.989677] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.677 [2024-04-19 03:34:24.989706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.677 qpair failed and we were unable to recover it. 00:20:47.677 [2024-04-19 03:34:24.999507] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.678 [2024-04-19 03:34:24.999663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.678 [2024-04-19 03:34:24.999693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.678 [2024-04-19 03:34:24.999708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.678 [2024-04-19 03:34:24.999736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.678 [2024-04-19 03:34:24.999765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.678 qpair failed and we were unable to recover it. 
00:20:47.678 [2024-04-19 03:34:25.009491] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.678 [2024-04-19 03:34:25.009626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.678 [2024-04-19 03:34:25.009657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.678 [2024-04-19 03:34:25.009673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.678 [2024-04-19 03:34:25.009686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.678 [2024-04-19 03:34:25.009715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.678 qpair failed and we were unable to recover it. 00:20:47.678 [2024-04-19 03:34:25.019524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.678 [2024-04-19 03:34:25.019661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.678 [2024-04-19 03:34:25.019687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.678 [2024-04-19 03:34:25.019702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.678 [2024-04-19 03:34:25.019715] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.678 [2024-04-19 03:34:25.019744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.678 qpair failed and we were unable to recover it. 00:20:47.678 [2024-04-19 03:34:25.029540] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.678 [2024-04-19 03:34:25.029689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.678 [2024-04-19 03:34:25.029714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.678 [2024-04-19 03:34:25.029729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.678 [2024-04-19 03:34:25.029742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.678 [2024-04-19 03:34:25.029785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.678 qpair failed and we were unable to recover it. 
00:20:47.678 [2024-04-19 03:34:25.039569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.678 [2024-04-19 03:34:25.039742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.678 [2024-04-19 03:34:25.039767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.678 [2024-04-19 03:34:25.039781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.678 [2024-04-19 03:34:25.039810] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.678 [2024-04-19 03:34:25.039839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.678 qpair failed and we were unable to recover it. 00:20:47.678 [2024-04-19 03:34:25.049616] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.678 [2024-04-19 03:34:25.049764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.678 [2024-04-19 03:34:25.049789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.678 [2024-04-19 03:34:25.049804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.678 [2024-04-19 03:34:25.049817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.678 [2024-04-19 03:34:25.049852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.678 qpair failed and we were unable to recover it. 00:20:47.678 [2024-04-19 03:34:25.059645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.678 [2024-04-19 03:34:25.059794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.678 [2024-04-19 03:34:25.059819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.678 [2024-04-19 03:34:25.059834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.678 [2024-04-19 03:34:25.059847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.678 [2024-04-19 03:34:25.059875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.678 qpair failed and we were unable to recover it. 
00:20:47.678 [2024-04-19 03:34:25.069706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.678 [2024-04-19 03:34:25.069853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.678 [2024-04-19 03:34:25.069878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.678 [2024-04-19 03:34:25.069894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.678 [2024-04-19 03:34:25.069906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.678 [2024-04-19 03:34:25.069950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.678 qpair failed and we were unable to recover it. 00:20:47.678 [2024-04-19 03:34:25.079660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.678 [2024-04-19 03:34:25.079792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.678 [2024-04-19 03:34:25.079817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.678 [2024-04-19 03:34:25.079832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.678 [2024-04-19 03:34:25.079845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.678 [2024-04-19 03:34:25.079874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.678 qpair failed and we were unable to recover it. 00:20:47.678 [2024-04-19 03:34:25.089791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.678 [2024-04-19 03:34:25.089943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.678 [2024-04-19 03:34:25.089969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.678 [2024-04-19 03:34:25.089984] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.678 [2024-04-19 03:34:25.089996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.678 [2024-04-19 03:34:25.090026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.678 qpair failed and we were unable to recover it. 
00:20:47.678 [2024-04-19 03:34:25.099717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.678 [2024-04-19 03:34:25.099868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.678 [2024-04-19 03:34:25.099894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.678 [2024-04-19 03:34:25.099909] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.678 [2024-04-19 03:34:25.099922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.678 [2024-04-19 03:34:25.099951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.678 qpair failed and we were unable to recover it. 00:20:47.678 [2024-04-19 03:34:25.109775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.678 [2024-04-19 03:34:25.109910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.678 [2024-04-19 03:34:25.109935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.678 [2024-04-19 03:34:25.109950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.678 [2024-04-19 03:34:25.109963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.678 [2024-04-19 03:34:25.109992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.678 qpair failed and we were unable to recover it. 00:20:47.678 [2024-04-19 03:34:25.119774] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.678 [2024-04-19 03:34:25.119919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.679 [2024-04-19 03:34:25.119944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.679 [2024-04-19 03:34:25.119960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.679 [2024-04-19 03:34:25.119972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.679 [2024-04-19 03:34:25.120001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.679 qpair failed and we were unable to recover it. 
00:20:47.679 [2024-04-19 03:34:25.129802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.679 [2024-04-19 03:34:25.129930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.679 [2024-04-19 03:34:25.129955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.679 [2024-04-19 03:34:25.129970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.679 [2024-04-19 03:34:25.129983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.679 [2024-04-19 03:34:25.130011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.679 qpair failed and we were unable to recover it. 00:20:47.679 [2024-04-19 03:34:25.139825] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.679 [2024-04-19 03:34:25.139955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.679 [2024-04-19 03:34:25.139982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.679 [2024-04-19 03:34:25.139997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.679 [2024-04-19 03:34:25.140016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.679 [2024-04-19 03:34:25.140046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.679 qpair failed and we were unable to recover it. 00:20:47.679 [2024-04-19 03:34:25.149857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.679 [2024-04-19 03:34:25.149990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.679 [2024-04-19 03:34:25.150017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.679 [2024-04-19 03:34:25.150032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.679 [2024-04-19 03:34:25.150045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.679 [2024-04-19 03:34:25.150075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.679 qpair failed and we were unable to recover it. 
00:20:47.679 [2024-04-19 03:34:25.159888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.679 [2024-04-19 03:34:25.160033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.679 [2024-04-19 03:34:25.160060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.679 [2024-04-19 03:34:25.160075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.679 [2024-04-19 03:34:25.160088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.679 [2024-04-19 03:34:25.160117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.679 qpair failed and we were unable to recover it. 00:20:47.679 [2024-04-19 03:34:25.169916] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.679 [2024-04-19 03:34:25.170042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.679 [2024-04-19 03:34:25.170069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.679 [2024-04-19 03:34:25.170084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.679 [2024-04-19 03:34:25.170096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.679 [2024-04-19 03:34:25.170126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.679 qpair failed and we were unable to recover it. 00:20:47.679 [2024-04-19 03:34:25.179983] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.679 [2024-04-19 03:34:25.180113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.679 [2024-04-19 03:34:25.180140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.679 [2024-04-19 03:34:25.180155] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.679 [2024-04-19 03:34:25.180167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.679 [2024-04-19 03:34:25.180197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.679 qpair failed and we were unable to recover it. 
00:20:47.679 [2024-04-19 03:34:25.190005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.679 [2024-04-19 03:34:25.190148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.679 [2024-04-19 03:34:25.190176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.679 [2024-04-19 03:34:25.190192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.679 [2024-04-19 03:34:25.190205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.679 [2024-04-19 03:34:25.190235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.679 qpair failed and we were unable to recover it. 00:20:47.679 [2024-04-19 03:34:25.200017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.679 [2024-04-19 03:34:25.200154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.679 [2024-04-19 03:34:25.200180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.679 [2024-04-19 03:34:25.200195] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.679 [2024-04-19 03:34:25.200208] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.679 [2024-04-19 03:34:25.200238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.679 qpair failed and we were unable to recover it. 00:20:47.679 [2024-04-19 03:34:25.210038] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.679 [2024-04-19 03:34:25.210176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.679 [2024-04-19 03:34:25.210202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.679 [2024-04-19 03:34:25.210217] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.679 [2024-04-19 03:34:25.210230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.679 [2024-04-19 03:34:25.210259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.679 qpair failed and we were unable to recover it. 
00:20:47.679 [2024-04-19 03:34:25.220083] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.679 [2024-04-19 03:34:25.220222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.679 [2024-04-19 03:34:25.220249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.679 [2024-04-19 03:34:25.220265] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.679 [2024-04-19 03:34:25.220278] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.679 [2024-04-19 03:34:25.220307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.679 qpair failed and we were unable to recover it. 00:20:47.679 [2024-04-19 03:34:25.230103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.679 [2024-04-19 03:34:25.230233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.679 [2024-04-19 03:34:25.230260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.679 [2024-04-19 03:34:25.230284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.679 [2024-04-19 03:34:25.230298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.679 [2024-04-19 03:34:25.230328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.679 qpair failed and we were unable to recover it. 00:20:47.939 [2024-04-19 03:34:25.240127] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.240260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.240288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.240304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.240317] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.240359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 
00:20:47.939 [2024-04-19 03:34:25.250166] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.250295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.250322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.250337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.250350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.250379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 00:20:47.939 [2024-04-19 03:34:25.260199] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.260331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.260356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.260370] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.260391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.260428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 00:20:47.939 [2024-04-19 03:34:25.270237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.270397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.270425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.270440] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.270457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.270487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 
00:20:47.939 [2024-04-19 03:34:25.280237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.280387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.280421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.280436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.280449] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.280477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 00:20:47.939 [2024-04-19 03:34:25.290267] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.290421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.290447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.290462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.290475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.290504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 00:20:47.939 [2024-04-19 03:34:25.300335] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.300478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.300510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.300524] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.300536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.300566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 
00:20:47.939 [2024-04-19 03:34:25.310344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.310515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.310552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.310568] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.310582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.310611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 00:20:47.939 [2024-04-19 03:34:25.320354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.320501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.320526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.320549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.320562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.320591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 00:20:47.939 [2024-04-19 03:34:25.330373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.330553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.330578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.330593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.330605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.330635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 
00:20:47.939 [2024-04-19 03:34:25.340409] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.340542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.340568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.340583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.340595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.340624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 00:20:47.939 [2024-04-19 03:34:25.350455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.350623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.350648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.350662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.350675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.350704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 00:20:47.939 [2024-04-19 03:34:25.360502] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.360637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.360662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.360676] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.360689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.360718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 
00:20:47.939 [2024-04-19 03:34:25.370514] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.939 [2024-04-19 03:34:25.370688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.939 [2024-04-19 03:34:25.370714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.939 [2024-04-19 03:34:25.370729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.939 [2024-04-19 03:34:25.370741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.939 [2024-04-19 03:34:25.370771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.939 qpair failed and we were unable to recover it. 00:20:47.940 [2024-04-19 03:34:25.380513] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.940 [2024-04-19 03:34:25.380643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.940 [2024-04-19 03:34:25.380668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.940 [2024-04-19 03:34:25.380683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.940 [2024-04-19 03:34:25.380695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.940 [2024-04-19 03:34:25.380725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.940 qpair failed and we were unable to recover it. 00:20:47.940 [2024-04-19 03:34:25.390575] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.940 [2024-04-19 03:34:25.390714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.940 [2024-04-19 03:34:25.390743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.940 [2024-04-19 03:34:25.390760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.940 [2024-04-19 03:34:25.390773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.940 [2024-04-19 03:34:25.390818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.940 qpair failed and we were unable to recover it. 
00:20:47.940 [2024-04-19 03:34:25.400591] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.940 [2024-04-19 03:34:25.400723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.940 [2024-04-19 03:34:25.400749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.940 [2024-04-19 03:34:25.400764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.940 [2024-04-19 03:34:25.400777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.940 [2024-04-19 03:34:25.400807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.940 qpair failed and we were unable to recover it. 00:20:47.940 [2024-04-19 03:34:25.410601] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.940 [2024-04-19 03:34:25.410733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.940 [2024-04-19 03:34:25.410763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.940 [2024-04-19 03:34:25.410779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.940 [2024-04-19 03:34:25.410792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.940 [2024-04-19 03:34:25.410821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.940 qpair failed and we were unable to recover it. 00:20:47.940 [2024-04-19 03:34:25.420629] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.940 [2024-04-19 03:34:25.420761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.940 [2024-04-19 03:34:25.420786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.940 [2024-04-19 03:34:25.420801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.940 [2024-04-19 03:34:25.420814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.940 [2024-04-19 03:34:25.420844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.940 qpair failed and we were unable to recover it. 
00:20:47.940 [2024-04-19 03:34:25.430682] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.940 [2024-04-19 03:34:25.430846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.940 [2024-04-19 03:34:25.430872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.940 [2024-04-19 03:34:25.430891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.940 [2024-04-19 03:34:25.430904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.940 [2024-04-19 03:34:25.430935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.940 qpair failed and we were unable to recover it. 00:20:47.940 [2024-04-19 03:34:25.440747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.940 [2024-04-19 03:34:25.440905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.940 [2024-04-19 03:34:25.440933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.940 [2024-04-19 03:34:25.440949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.940 [2024-04-19 03:34:25.440962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.940 [2024-04-19 03:34:25.441004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.940 qpair failed and we were unable to recover it. 00:20:47.940 [2024-04-19 03:34:25.450714] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.940 [2024-04-19 03:34:25.450846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.940 [2024-04-19 03:34:25.450873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.940 [2024-04-19 03:34:25.450888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.940 [2024-04-19 03:34:25.450901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.940 [2024-04-19 03:34:25.450937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.940 qpair failed and we were unable to recover it. 
00:20:47.940 [2024-04-19 03:34:25.460736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.940 [2024-04-19 03:34:25.460865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.940 [2024-04-19 03:34:25.460891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.940 [2024-04-19 03:34:25.460906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.940 [2024-04-19 03:34:25.460919] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.940 [2024-04-19 03:34:25.460948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.940 qpair failed and we were unable to recover it. 00:20:47.940 [2024-04-19 03:34:25.470780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.940 [2024-04-19 03:34:25.470914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.940 [2024-04-19 03:34:25.470940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.940 [2024-04-19 03:34:25.470955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.940 [2024-04-19 03:34:25.470968] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.940 [2024-04-19 03:34:25.470997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.940 qpair failed and we were unable to recover it. 00:20:47.940 [2024-04-19 03:34:25.480796] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.940 [2024-04-19 03:34:25.480937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.940 [2024-04-19 03:34:25.480962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.940 [2024-04-19 03:34:25.480977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.940 [2024-04-19 03:34:25.480990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.940 [2024-04-19 03:34:25.481019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.940 qpair failed and we were unable to recover it. 
00:20:47.940 [2024-04-19 03:34:25.490889] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.940 [2024-04-19 03:34:25.491020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.940 [2024-04-19 03:34:25.491046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.940 [2024-04-19 03:34:25.491061] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.940 [2024-04-19 03:34:25.491073] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:47.940 [2024-04-19 03:34:25.491114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.940 qpair failed and we were unable to recover it. 00:20:48.199 [2024-04-19 03:34:25.500962] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.200 [2024-04-19 03:34:25.501092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.200 [2024-04-19 03:34:25.501123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.200 [2024-04-19 03:34:25.501139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.200 [2024-04-19 03:34:25.501152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:48.200 [2024-04-19 03:34:25.501182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.200 qpair failed and we were unable to recover it. 00:20:48.200 [2024-04-19 03:34:25.510889] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.200 [2024-04-19 03:34:25.511026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.200 [2024-04-19 03:34:25.511051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.200 [2024-04-19 03:34:25.511066] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.200 [2024-04-19 03:34:25.511078] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:48.200 [2024-04-19 03:34:25.511108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.200 qpair failed and we were unable to recover it. 
00:20:48.200 [2024-04-19 03:34:25.520920] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.200 [2024-04-19 03:34:25.521060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.200 [2024-04-19 03:34:25.521086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.200 [2024-04-19 03:34:25.521106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.200 [2024-04-19 03:34:25.521122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:48.200 [2024-04-19 03:34:25.521153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.200 qpair failed and we were unable to recover it. 00:20:48.200 [2024-04-19 03:34:25.531001] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.200 [2024-04-19 03:34:25.531148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.200 [2024-04-19 03:34:25.531174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.200 [2024-04-19 03:34:25.531189] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.200 [2024-04-19 03:34:25.531201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:48.200 [2024-04-19 03:34:25.531246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.200 qpair failed and we were unable to recover it. 00:20:48.200 [2024-04-19 03:34:25.540978] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.200 [2024-04-19 03:34:25.541137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.200 [2024-04-19 03:34:25.541162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.200 [2024-04-19 03:34:25.541177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.200 [2024-04-19 03:34:25.541195] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:48.200 [2024-04-19 03:34:25.541225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.200 qpair failed and we were unable to recover it. 
00:20:48.200 [2024-04-19 03:34:25.550997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.200 [2024-04-19 03:34:25.551131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.200 [2024-04-19 03:34:25.551157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.200 [2024-04-19 03:34:25.551171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.200 [2024-04-19 03:34:25.551184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:48.200 [2024-04-19 03:34:25.551214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.200 qpair failed and we were unable to recover it. 00:20:48.200 [2024-04-19 03:34:25.561044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.200 [2024-04-19 03:34:25.561179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.200 [2024-04-19 03:34:25.561204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.200 [2024-04-19 03:34:25.561219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.200 [2024-04-19 03:34:25.561232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:48.200 [2024-04-19 03:34:25.561262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.200 qpair failed and we were unable to recover it. 00:20:48.200 [2024-04-19 03:34:25.571071] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.200 [2024-04-19 03:34:25.571204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.200 [2024-04-19 03:34:25.571231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.200 [2024-04-19 03:34:25.571246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.200 [2024-04-19 03:34:25.571262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:48.200 [2024-04-19 03:34:25.571306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.200 qpair failed and we were unable to recover it. 
00:20:48.200 [2024-04-19 03:34:25.581084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.200 [2024-04-19 03:34:25.581216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.200 [2024-04-19 03:34:25.581241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.200 [2024-04-19 03:34:25.581256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.200 [2024-04-19 03:34:25.581269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:48.200 [2024-04-19 03:34:25.581298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.200 qpair failed and we were unable to recover it. 00:20:48.200 [2024-04-19 03:34:25.591144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.200 [2024-04-19 03:34:25.591309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.200 [2024-04-19 03:34:25.591336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.200 [2024-04-19 03:34:25.591355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.200 [2024-04-19 03:34:25.591367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:48.200 [2024-04-19 03:34:25.591406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.200 qpair failed and we were unable to recover it. 00:20:48.200 [2024-04-19 03:34:25.601143] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.200 [2024-04-19 03:34:25.601278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.200 [2024-04-19 03:34:25.601304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.200 [2024-04-19 03:34:25.601319] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.200 [2024-04-19 03:34:25.601332] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:48.200 [2024-04-19 03:34:25.601362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.200 qpair failed and we were unable to recover it. 
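Target-side and initiator-side messages interleave here because the test drives both ends from one session, so both streams end up multiplexed into the same capture. A hypothetical way to triage a saved copy of this console output (the file name build.log is assumed, not something the test produces):

    # target-side rejections vs. host-side connect failures
    grep 'ctrlr.c' build.log | grep -c 'Unknown controller ID'
    grep 'nvme_fabric.c' build.log | grep -c 'Connect command failed'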
00:20:48.200 [2024-04-19 03:34:25.611168] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.200 [2024-04-19 03:34:25.611310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.200 [2024-04-19 03:34:25.611335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.200 [2024-04-19 03:34:25.611350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.200 [2024-04-19 03:34:25.611363] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:48.200 [2024-04-19 03:34:25.611399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.200 qpair failed and we were unable to recover it. 00:20:48.200 [2024-04-19 03:34:25.621190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.200 [2024-04-19 03:34:25.621318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.201 [2024-04-19 03:34:25.621343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.201 [2024-04-19 03:34:25.621358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.201 [2024-04-19 03:34:25.621371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b8000b90 00:20:48.201 [2024-04-19 03:34:25.621408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.201 qpair failed and we were unable to recover it. 00:20:48.201 [2024-04-19 03:34:25.631264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.201 [2024-04-19 03:34:25.631413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.201 [2024-04-19 03:34:25.631445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.201 [2024-04-19 03:34:25.631472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.201 [2024-04-19 03:34:25.631486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b0000b90 00:20:48.201 [2024-04-19 03:34:25.631518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.201 qpair failed and we were unable to recover it. 
00:20:48.201 [2024-04-19 03:34:25.641297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.201 [2024-04-19 03:34:25.641435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.201 [2024-04-19 03:34:25.641466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.201 [2024-04-19 03:34:25.641482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.201 [2024-04-19 03:34:25.641495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:48.201 [2024-04-19 03:34:25.641524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:48.201 qpair failed and we were unable to recover it. 00:20:48.201 [2024-04-19 03:34:25.651297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.201 [2024-04-19 03:34:25.651444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.201 [2024-04-19 03:34:25.651471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.201 [2024-04-19 03:34:25.651485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.201 [2024-04-19 03:34:25.651498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f33f30 00:20:48.201 [2024-04-19 03:34:25.651526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:48.201 qpair failed and we were unable to recover it. 00:20:48.201 [2024-04-19 03:34:25.661350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.201 [2024-04-19 03:34:25.661486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.201 [2024-04-19 03:34:25.661518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.201 [2024-04-19 03:34:25.661533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.201 [2024-04-19 03:34:25.661546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04b0000b90 00:20:48.201 [2024-04-19 03:34:25.661577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.201 qpair failed and we were unable to recover it. 
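The tail of the sequence is worth reading closely: the failures stop cycling on qpair id 1 and walk across ids 2, 3 and 4, each with its own tqpair pointer (0x7f04b8000b90, 0x7f04b0000b90, 0x1f33f30, 0x7f04a8000b90). This reads as every I/O queue pair the host tries to bring up being rejected the same way, which is what finally starves the controller's keep-alive in the next block and forces a reset.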
00:20:48.201 [2024-04-19 03:34:25.671366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.201 [2024-04-19 03:34:25.671559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.201 [2024-04-19 03:34:25.671591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.201 [2024-04-19 03:34:25.671607] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.201 [2024-04-19 03:34:25.671620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:48.201 [2024-04-19 03:34:25.671652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:48.201 qpair failed and we were unable to recover it. 00:20:48.201 [2024-04-19 03:34:25.681402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.201 [2024-04-19 03:34:25.681572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.201 [2024-04-19 03:34:25.681599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.201 [2024-04-19 03:34:25.681614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.201 [2024-04-19 03:34:25.681626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f04a8000b90 00:20:48.201 [2024-04-19 03:34:25.681657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:48.201 qpair failed and we were unable to recover it. 00:20:48.201 [2024-04-19 03:34:25.681757] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:20:48.201 A controller has encountered a failure and is being reset. 00:20:48.201 Controller properly reset. 00:20:48.201 Initializing NVMe Controllers 00:20:48.201 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:48.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:48.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:20:48.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:20:48.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:20:48.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:20:48.201 Initialization complete. Launching workers. 
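At this point the host gives up on per-qpair recovery: the keep-alive submission fails, the controller is reset, and the log shows a clean re-attach with an I/O qpair associated per lcore (0 through 3) before worker threads launch. A rough kernel-initiator analogue of that reconnect, for illustration only (the test itself uses SPDK's userspace initiator; the address and subsystem NQN below are the ones from the log, and root privileges plus nvme-cli are assumed):

    # connect, verify, and tear down an NVMe-oF TCP controller with nvme-cli
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list-subsys                               # confirm the subsystem attached
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # detach when done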
00:20:48.201 Starting thread on core 1 00:20:48.201 Starting thread on core 2 00:20:48.201 Starting thread on core 3 00:20:48.201 Starting thread on core 0 00:20:48.201 03:34:25 -- host/target_disconnect.sh@59 -- # sync 00:20:48.201 00:20:48.201 real 0m10.739s 00:20:48.201 user 0m18.042s 00:20:48.201 sys 0m5.375s 00:20:48.201 03:34:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:48.201 03:34:25 -- common/autotest_common.sh@10 -- # set +x 00:20:48.201 ************************************ 00:20:48.201 END TEST nvmf_target_disconnect_tc2 00:20:48.201 ************************************ 00:20:48.201 03:34:25 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:20:48.201 03:34:25 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:20:48.201 03:34:25 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:20:48.201 03:34:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:48.201 03:34:25 -- nvmf/common.sh@117 -- # sync 00:20:48.201 03:34:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:48.201 03:34:25 -- nvmf/common.sh@120 -- # set +e 00:20:48.201 03:34:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:48.201 03:34:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:48.201 rmmod nvme_tcp 00:20:48.201 rmmod nvme_fabrics 00:20:48.460 rmmod nvme_keyring 00:20:48.460 03:34:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:48.460 03:34:25 -- nvmf/common.sh@124 -- # set -e 00:20:48.460 03:34:25 -- nvmf/common.sh@125 -- # return 0 00:20:48.460 03:34:25 -- nvmf/common.sh@478 -- # '[' -n 329576 ']' 00:20:48.460 03:34:25 -- nvmf/common.sh@479 -- # killprocess 329576 00:20:48.460 03:34:25 -- common/autotest_common.sh@936 -- # '[' -z 329576 ']' 00:20:48.460 03:34:25 -- common/autotest_common.sh@940 -- # kill -0 329576 00:20:48.460 03:34:25 -- common/autotest_common.sh@941 -- # uname 00:20:48.460 03:34:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:48.460 03:34:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 329576 00:20:48.460 03:34:25 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:20:48.460 03:34:25 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:20:48.460 03:34:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 329576' 00:20:48.460 killing process with pid 329576 00:20:48.460 03:34:25 -- common/autotest_common.sh@955 -- # kill 329576 00:20:48.460 03:34:25 -- common/autotest_common.sh@960 -- # wait 329576 00:20:48.719 03:34:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:48.719 03:34:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:48.719 03:34:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:48.719 03:34:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:48.719 03:34:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:48.719 03:34:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.719 03:34:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:48.719 03:34:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.626 03:34:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:50.626 00:20:50.626 real 0m15.627s 00:20:50.626 user 0m43.949s 00:20:50.626 sys 0m7.411s 00:20:50.626 03:34:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:50.626 03:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:50.626 ************************************ 00:20:50.626 END TEST nvmf_target_disconnect 00:20:50.626 
************************************ 00:20:50.626 03:34:28 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:20:50.626 03:34:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:50.626 03:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:50.626 03:34:28 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:20:50.626 00:20:50.626 real 15m25.523s 00:20:50.626 user 35m47.658s 00:20:50.626 sys 4m15.113s 00:20:50.626 03:34:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:50.626 03:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:50.626 ************************************ 00:20:50.626 END TEST nvmf_tcp 00:20:50.626 ************************************ 00:20:50.884 03:34:28 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:20:50.884 03:34:28 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:20:50.884 03:34:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:50.884 03:34:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:50.884 03:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:50.884 ************************************ 00:20:50.884 START TEST spdkcli_nvmf_tcp 00:20:50.884 ************************************ 00:20:50.884 03:34:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:20:50.884 * Looking for test storage... 00:20:50.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:20:50.884 03:34:28 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:20:50.884 03:34:28 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:20:50.884 03:34:28 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:20:50.884 03:34:28 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.884 03:34:28 -- nvmf/common.sh@7 -- # uname -s 00:20:50.884 03:34:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.884 03:34:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.884 03:34:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.884 03:34:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.884 03:34:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.884 03:34:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.884 03:34:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.884 03:34:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.884 03:34:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.884 03:34:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.884 03:34:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.884 03:34:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.884 03:34:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.884 03:34:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.884 03:34:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.884 03:34:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.884 03:34:28 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.884 03:34:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.884 03:34:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.884 03:34:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.884 03:34:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.884 03:34:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.884 03:34:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.884 03:34:28 -- paths/export.sh@5 -- # export PATH 00:20:50.884 03:34:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.884 03:34:28 -- nvmf/common.sh@47 -- # : 0 00:20:50.884 03:34:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:50.884 03:34:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:50.884 03:34:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.884 03:34:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.884 03:34:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.884 03:34:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:50.884 03:34:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:50.884 03:34:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:50.884 03:34:28 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:20:50.884 03:34:28 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:20:50.884 03:34:28 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:20:50.884 03:34:28 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:20:50.884 03:34:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:50.884 03:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:50.884 03:34:28 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:20:50.884 03:34:28 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=330778 00:20:50.884 03:34:28 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:20:50.884 03:34:28 -- spdkcli/common.sh@34 -- # 
waitforlisten 330778 00:20:50.884 03:34:28 -- common/autotest_common.sh@817 -- # '[' -z 330778 ']' 00:20:50.885 03:34:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.885 03:34:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:50.885 03:34:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.885 03:34:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:50.885 03:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:50.885 [2024-04-19 03:34:28.412270] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:20:50.885 [2024-04-19 03:34:28.412370] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330778 ] 00:20:50.885 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.144 [2024-04-19 03:34:28.470410] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:51.144 [2024-04-19 03:34:28.578630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.144 [2024-04-19 03:34:28.578634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.144 03:34:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:51.144 03:34:28 -- common/autotest_common.sh@850 -- # return 0 00:20:51.144 03:34:28 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:20:51.144 03:34:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:51.144 03:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:51.402 03:34:28 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:20:51.402 03:34:28 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:20:51.402 03:34:28 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:20:51.402 03:34:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:51.402 03:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:51.402 03:34:28 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:51.402 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:51.402 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:20:51.402 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:20:51.402 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:20:51.402 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:20:51.402 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:20:51.402 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:20:51.402 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:20:51.402 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:20:51.402 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:20:51.402 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:20:51.402 ' 00:20:51.660 [2024-04-19 03:34:29.083634] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:54.193 [2024-04-19 03:34:31.258734] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.127 [2024-04-19 03:34:32.499130] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:20:57.658 [2024-04-19 03:34:34.790226] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:20:59.560 [2024-04-19 03:34:36.736602] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:21:00.936 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:00.936 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:00.936 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:00.936 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:00.936 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:00.936 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:00.936 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:00.937 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:00.937 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:00.937 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:00.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:00.937 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:00.937 03:34:38 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:00.937 03:34:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:00.937 03:34:38 -- common/autotest_common.sh@10 -- # set +x 00:21:00.937 03:34:38 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:00.937 03:34:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:00.937 03:34:38 -- common/autotest_common.sh@10 -- # set +x 00:21:00.937 03:34:38 -- spdkcli/nvmf.sh@69 -- # check_match 00:21:00.937 03:34:38 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:21:01.503 03:34:38 
-- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:01.503 03:34:38 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:21:01.503 03:34:38 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:01.503 03:34:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:01.503 03:34:38 -- common/autotest_common.sh@10 -- # set +x 00:21:01.503 03:34:38 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:01.503 03:34:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:01.503 03:34:38 -- common/autotest_common.sh@10 -- # set +x 00:21:01.503 03:34:38 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:01.503 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:01.503 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:01.503 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:01.503 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:21:01.503 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:21:01.503 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:01.503 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:01.503 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:01.503 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:01.503 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:01.503 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:01.503 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:01.503 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:01.503 ' 00:21:06.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:21:06.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:21:06.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:06.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:21:06.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:21:06.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:21:06.771 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:21:06.771 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:06.771 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:21:06.771 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:21:06.771 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:21:06.771 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:21:06.771 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:21:06.771 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:21:06.771 03:34:44 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:21:06.771 03:34:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:06.771 03:34:44 -- common/autotest_common.sh@10 -- # set +x 00:21:06.771 03:34:44 -- spdkcli/nvmf.sh@90 -- # killprocess 330778 00:21:06.771 03:34:44 -- common/autotest_common.sh@936 -- # '[' -z 330778 ']' 00:21:06.771 03:34:44 -- common/autotest_common.sh@940 -- # kill -0 330778 00:21:06.771 03:34:44 -- common/autotest_common.sh@941 -- # uname 00:21:06.771 03:34:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:06.771 03:34:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 330778 00:21:06.771 03:34:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:06.771 03:34:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:06.771 03:34:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 330778' 00:21:06.771 killing process with pid 330778 00:21:06.771 03:34:44 -- common/autotest_common.sh@955 -- # kill 330778 00:21:06.771 [2024-04-19 03:34:44.120355] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:06.771 03:34:44 -- common/autotest_common.sh@960 -- # wait 330778 00:21:07.030 03:34:44 -- spdkcli/nvmf.sh@1 -- # cleanup 00:21:07.030 03:34:44 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:21:07.030 03:34:44 -- spdkcli/common.sh@13 -- # '[' -n 330778 ']' 00:21:07.030 03:34:44 -- spdkcli/common.sh@14 -- # killprocess 330778 00:21:07.030 03:34:44 -- common/autotest_common.sh@936 -- # '[' -z 330778 ']' 00:21:07.030 03:34:44 -- common/autotest_common.sh@940 -- # kill -0 330778 00:21:07.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (330778) - No such process 00:21:07.030 03:34:44 -- common/autotest_common.sh@963 -- # echo 'Process with pid 330778 is not found' 00:21:07.030 Process with pid 330778 is not found 00:21:07.030 03:34:44 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:07.030 03:34:44 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:07.030 03:34:44 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:07.030 00:21:07.030 real 0m16.070s 00:21:07.030 user 0m33.915s 00:21:07.030 sys 0m0.821s 00:21:07.030 03:34:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:07.030 03:34:44 -- common/autotest_common.sh@10 -- # set +x 00:21:07.030 ************************************ 00:21:07.030 END TEST spdkcli_nvmf_tcp 00:21:07.030 ************************************ 00:21:07.030 03:34:44 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:07.030 03:34:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:07.030 03:34:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:07.030 03:34:44 -- common/autotest_common.sh@10 -- # set +x 00:21:07.030 ************************************ 00:21:07.030 START TEST 
nvmf_identify_passthru 00:21:07.030 ************************************ 00:21:07.030 03:34:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:07.030 * Looking for test storage... 00:21:07.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:07.030 03:34:44 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.030 03:34:44 -- nvmf/common.sh@7 -- # uname -s 00:21:07.030 03:34:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.030 03:34:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.030 03:34:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.030 03:34:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.030 03:34:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.030 03:34:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.030 03:34:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.030 03:34:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.030 03:34:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.030 03:34:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.030 03:34:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.030 03:34:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.030 03:34:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.030 03:34:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.030 03:34:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.030 03:34:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.030 03:34:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.030 03:34:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.030 03:34:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.030 03:34:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.030 03:34:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.030 03:34:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.030 03:34:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.030 03:34:44 -- paths/export.sh@5 -- # export PATH 00:21:07.030 03:34:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.030 03:34:44 -- nvmf/common.sh@47 -- # : 0 00:21:07.030 03:34:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:07.030 03:34:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:07.030 03:34:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.030 03:34:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.030 03:34:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.030 03:34:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:07.030 03:34:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:07.030 03:34:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:07.030 03:34:44 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.030 03:34:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.030 03:34:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.030 03:34:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.030 03:34:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.030 03:34:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.031 03:34:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.031 03:34:44 -- paths/export.sh@5 -- # export PATH 00:21:07.031 03:34:44 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.031 03:34:44 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:21:07.031 03:34:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:07.031 03:34:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.031 03:34:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:07.031 03:34:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:07.031 03:34:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:07.031 03:34:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.031 03:34:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:07.031 03:34:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.031 03:34:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:07.031 03:34:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:07.031 03:34:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:07.031 03:34:44 -- common/autotest_common.sh@10 -- # set +x 00:21:08.933 03:34:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:08.933 03:34:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:08.933 03:34:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:08.933 03:34:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:08.933 03:34:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:08.933 03:34:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:08.933 03:34:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:08.933 03:34:46 -- nvmf/common.sh@295 -- # net_devs=() 00:21:08.933 03:34:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:08.933 03:34:46 -- nvmf/common.sh@296 -- # e810=() 00:21:08.933 03:34:46 -- nvmf/common.sh@296 -- # local -ga e810 00:21:08.933 03:34:46 -- nvmf/common.sh@297 -- # x722=() 00:21:08.933 03:34:46 -- nvmf/common.sh@297 -- # local -ga x722 00:21:08.933 03:34:46 -- nvmf/common.sh@298 -- # mlx=() 00:21:08.933 03:34:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:08.933 03:34:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.933 03:34:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.933 03:34:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.933 03:34:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.933 03:34:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.933 03:34:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.933 03:34:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.933 03:34:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.933 03:34:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.933 03:34:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.933 03:34:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.933 03:34:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:08.933 03:34:46 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:08.933 03:34:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:08.933 03:34:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:08.933 03:34:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:08.933 03:34:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:08.933 03:34:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.933 03:34:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:08.933 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:08.933 03:34:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.933 03:34:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.933 03:34:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.933 03:34:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.933 03:34:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.933 03:34:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.933 03:34:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:08.934 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:08.934 03:34:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.934 03:34:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.934 03:34:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.934 03:34:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.934 03:34:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.934 03:34:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:08.934 03:34:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:08.934 03:34:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:08.934 03:34:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.934 03:34:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.934 03:34:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:08.934 03:34:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.934 03:34:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:08.934 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:08.934 03:34:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.934 03:34:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.934 03:34:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.934 03:34:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:08.934 03:34:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.934 03:34:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:08.934 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:08.934 03:34:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.934 03:34:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:08.934 03:34:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:08.934 03:34:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:08.934 03:34:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:08.934 03:34:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:08.934 03:34:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.934 03:34:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.934 03:34:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.934 03:34:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:08.934 03:34:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.934 03:34:46 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.934 03:34:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:08.934 03:34:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.934 03:34:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.934 03:34:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:08.934 03:34:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:08.934 03:34:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.934 03:34:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.193 03:34:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.193 03:34:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.193 03:34:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:09.193 03:34:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:09.193 03:34:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.193 03:34:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.193 03:34:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:09.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:21:09.193 00:21:09.193 --- 10.0.0.2 ping statistics --- 00:21:09.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.193 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:21:09.193 03:34:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:09.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:21:09.193 00:21:09.193 --- 10.0.0.1 ping statistics --- 00:21:09.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.193 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:21:09.193 03:34:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.193 03:34:46 -- nvmf/common.sh@411 -- # return 0 00:21:09.193 03:34:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:09.193 03:34:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.193 03:34:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:09.193 03:34:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:09.193 03:34:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.193 03:34:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:09.193 03:34:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:09.193 03:34:46 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:21:09.193 03:34:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:09.193 03:34:46 -- common/autotest_common.sh@10 -- # set +x 00:21:09.193 03:34:46 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:21:09.193 03:34:46 -- common/autotest_common.sh@1510 -- # bdfs=() 00:21:09.193 03:34:46 -- common/autotest_common.sh@1510 -- # local bdfs 00:21:09.193 03:34:46 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:21:09.193 03:34:46 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:21:09.193 03:34:46 -- common/autotest_common.sh@1499 -- # bdfs=() 00:21:09.193 03:34:46 -- common/autotest_common.sh@1499 -- # local bdfs 00:21:09.193 03:34:46 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:21:09.193 03:34:46 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:09.193 03:34:46 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:21:09.193 03:34:46 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:21:09.193 03:34:46 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:21:09.193 03:34:46 -- common/autotest_common.sh@1513 -- # echo 0000:88:00.0 00:21:09.193 03:34:46 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:21:09.193 03:34:46 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:21:09.193 03:34:46 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:21:09.193 03:34:46 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:21:09.193 03:34:46 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:21:09.193 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.395 03:34:50 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:21:13.395 03:34:50 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:21:13.395 03:34:50 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:21:13.395 03:34:50 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:21:13.395 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.612 03:34:55 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:21:17.612 03:34:55 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:21:17.612 03:34:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:17.612 03:34:55 -- common/autotest_common.sh@10 -- # set +x 00:21:17.612 03:34:55 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:21:17.612 03:34:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:17.612 03:34:55 -- common/autotest_common.sh@10 -- # set +x 00:21:17.612 03:34:55 -- target/identify_passthru.sh@31 -- # nvmfpid=335308 00:21:17.612 03:34:55 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:17.612 03:34:55 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:17.612 03:34:55 -- target/identify_passthru.sh@35 -- # waitforlisten 335308 00:21:17.612 03:34:55 -- common/autotest_common.sh@817 -- # '[' -z 335308 ']' 00:21:17.612 03:34:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.612 03:34:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:17.612 03:34:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.612 03:34:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:17.612 03:34:55 -- common/autotest_common.sh@10 -- # set +x 00:21:17.612 [2024-04-19 03:34:55.139867] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
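Aside: the serial and model capture traced above reduces to a small shell pipeline around spdk_nvme_identify. A minimal sketch, assuming an SPDK build tree with spdk_nvme_identify compiled and the BDF that gen_nvme.sh reported for this host:

    # Hypothetical condensed form of the identify steps in identify_passthru.sh
    bdf=0000:88:00.0                       # from gen_nvme.sh | jq -r '.config[].params.traddr'
    identify=./build/bin/spdk_nvme_identify
    nvme_serial_number=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Serial Number:' | awk '{print $3}')
    nvme_model_number=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Model Number:' | awk '{print $3}')
    # both values are read again later over NVMe/TCP and must match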
00:21:17.612 [2024-04-19 03:34:55.139964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.870 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.870 [2024-04-19 03:34:55.205447] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.870 [2024-04-19 03:34:55.311102] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.870 [2024-04-19 03:34:55.311159] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.870 [2024-04-19 03:34:55.311194] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.870 [2024-04-19 03:34:55.311205] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.870 [2024-04-19 03:34:55.311215] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.870 [2024-04-19 03:34:55.311298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.870 [2024-04-19 03:34:55.311363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.870 [2024-04-19 03:34:55.311434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.870 [2024-04-19 03:34:55.311438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.870 03:34:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:17.870 03:34:55 -- common/autotest_common.sh@850 -- # return 0 00:21:17.870 03:34:55 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:21:17.870 03:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.870 03:34:55 -- common/autotest_common.sh@10 -- # set +x 00:21:17.870 INFO: Log level set to 20 00:21:17.870 INFO: Requests: 00:21:17.870 { 00:21:17.870 "jsonrpc": "2.0", 00:21:17.870 "method": "nvmf_set_config", 00:21:17.870 "id": 1, 00:21:17.870 "params": { 00:21:17.870 "admin_cmd_passthru": { 00:21:17.870 "identify_ctrlr": true 00:21:17.870 } 00:21:17.870 } 00:21:17.870 } 00:21:17.870 00:21:17.870 INFO: response: 00:21:17.870 { 00:21:17.870 "jsonrpc": "2.0", 00:21:17.870 "id": 1, 00:21:17.870 "result": true 00:21:17.870 } 00:21:17.870 00:21:17.870 03:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.870 03:34:55 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:21:17.870 03:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.870 03:34:55 -- common/autotest_common.sh@10 -- # set +x 00:21:17.870 INFO: Setting log level to 20 00:21:17.870 INFO: Setting log level to 20 00:21:17.870 INFO: Log level set to 20 00:21:17.870 INFO: Log level set to 20 00:21:17.870 INFO: Requests: 00:21:17.870 { 00:21:17.870 "jsonrpc": "2.0", 00:21:17.870 "method": "framework_start_init", 00:21:17.870 "id": 1 00:21:17.870 } 00:21:17.870 00:21:17.870 INFO: Requests: 00:21:17.870 { 00:21:17.870 "jsonrpc": "2.0", 00:21:17.870 "method": "framework_start_init", 00:21:17.870 "id": 1 00:21:17.870 } 00:21:17.870 00:21:18.128 [2024-04-19 03:34:55.452578] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:21:18.128 INFO: response: 00:21:18.128 { 00:21:18.128 "jsonrpc": "2.0", 00:21:18.128 "id": 1, 00:21:18.128 "result": true 00:21:18.128 } 00:21:18.128 00:21:18.128 INFO: response: 00:21:18.128 { 00:21:18.128 
"jsonrpc": "2.0", 00:21:18.128 "id": 1, 00:21:18.128 "result": true 00:21:18.128 } 00:21:18.128 00:21:18.128 03:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.128 03:34:55 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:18.128 03:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.128 03:34:55 -- common/autotest_common.sh@10 -- # set +x 00:21:18.128 INFO: Setting log level to 40 00:21:18.128 INFO: Setting log level to 40 00:21:18.128 INFO: Setting log level to 40 00:21:18.128 [2024-04-19 03:34:55.462555] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.128 03:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.128 03:34:55 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:21:18.128 03:34:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:18.128 03:34:55 -- common/autotest_common.sh@10 -- # set +x 00:21:18.128 03:34:55 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:21:18.128 03:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.128 03:34:55 -- common/autotest_common.sh@10 -- # set +x 00:21:21.406 Nvme0n1 00:21:21.406 03:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.406 03:34:58 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:21:21.406 03:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.406 03:34:58 -- common/autotest_common.sh@10 -- # set +x 00:21:21.406 03:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.406 03:34:58 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:21.406 03:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.406 03:34:58 -- common/autotest_common.sh@10 -- # set +x 00:21:21.406 03:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.406 03:34:58 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:21.406 03:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.406 03:34:58 -- common/autotest_common.sh@10 -- # set +x 00:21:21.406 [2024-04-19 03:34:58.352705] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.406 03:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.406 03:34:58 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:21:21.406 03:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.406 03:34:58 -- common/autotest_common.sh@10 -- # set +x 00:21:21.406 [2024-04-19 03:34:58.360416] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:21.406 [ 00:21:21.406 { 00:21:21.406 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:21.406 "subtype": "Discovery", 00:21:21.406 "listen_addresses": [], 00:21:21.406 "allow_any_host": true, 00:21:21.406 "hosts": [] 00:21:21.406 }, 00:21:21.406 { 00:21:21.406 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.406 "subtype": "NVMe", 00:21:21.406 "listen_addresses": [ 00:21:21.406 { 00:21:21.406 "transport": "TCP", 00:21:21.406 "trtype": "TCP", 00:21:21.406 "adrfam": "IPv4", 00:21:21.406 "traddr": "10.0.0.2", 00:21:21.406 "trsvcid": "4420" 00:21:21.406 } 00:21:21.406 ], 
00:21:21.406 "allow_any_host": true, 00:21:21.406 "hosts": [], 00:21:21.406 "serial_number": "SPDK00000000000001", 00:21:21.406 "model_number": "SPDK bdev Controller", 00:21:21.406 "max_namespaces": 1, 00:21:21.406 "min_cntlid": 1, 00:21:21.406 "max_cntlid": 65519, 00:21:21.406 "namespaces": [ 00:21:21.406 { 00:21:21.406 "nsid": 1, 00:21:21.406 "bdev_name": "Nvme0n1", 00:21:21.406 "name": "Nvme0n1", 00:21:21.406 "nguid": "369FD8F97D3B4104ADC873479CD00F46", 00:21:21.406 "uuid": "369fd8f9-7d3b-4104-adc8-73479cd00f46" 00:21:21.406 } 00:21:21.406 ] 00:21:21.406 } 00:21:21.406 ] 00:21:21.406 03:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.406 03:34:58 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:21.406 03:34:58 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:21:21.406 03:34:58 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:21:21.406 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.406 03:34:58 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:21:21.406 03:34:58 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:21.406 03:34:58 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:21:21.406 03:34:58 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:21:21.406 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.406 03:34:58 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:21:21.406 03:34:58 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:21:21.406 03:34:58 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:21:21.406 03:34:58 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:21.406 03:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.406 03:34:58 -- common/autotest_common.sh@10 -- # set +x 00:21:21.406 03:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.407 03:34:58 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:21:21.407 03:34:58 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:21:21.407 03:34:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:21.407 03:34:58 -- nvmf/common.sh@117 -- # sync 00:21:21.407 03:34:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:21.407 03:34:58 -- nvmf/common.sh@120 -- # set +e 00:21:21.407 03:34:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:21.407 03:34:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:21.407 rmmod nvme_tcp 00:21:21.407 rmmod nvme_fabrics 00:21:21.407 rmmod nvme_keyring 00:21:21.407 03:34:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:21.407 03:34:58 -- nvmf/common.sh@124 -- # set -e 00:21:21.407 03:34:58 -- nvmf/common.sh@125 -- # return 0 00:21:21.407 03:34:58 -- nvmf/common.sh@478 -- # '[' -n 335308 ']' 00:21:21.407 03:34:58 -- nvmf/common.sh@479 -- # killprocess 335308 00:21:21.407 03:34:58 -- common/autotest_common.sh@936 -- # '[' -z 335308 ']' 00:21:21.407 03:34:58 -- common/autotest_common.sh@940 -- # kill -0 335308 00:21:21.407 03:34:58 -- common/autotest_common.sh@941 -- # uname 00:21:21.407 03:34:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:21.407 
03:34:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 335308 00:21:21.407 03:34:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:21.407 03:34:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:21.407 03:34:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 335308' 00:21:21.407 killing process with pid 335308 00:21:21.407 03:34:58 -- common/autotest_common.sh@955 -- # kill 335308 00:21:21.407 [2024-04-19 03:34:58.836778] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:21.407 03:34:58 -- common/autotest_common.sh@960 -- # wait 335308 00:21:23.305 03:35:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:23.305 03:35:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:23.305 03:35:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:23.305 03:35:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.305 03:35:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:23.305 03:35:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.305 03:35:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:23.305 03:35:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.209 03:35:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:25.209 00:21:25.209 real 0m17.987s 00:21:25.209 user 0m26.767s 00:21:25.209 sys 0m2.250s 00:21:25.209 03:35:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:25.209 03:35:02 -- common/autotest_common.sh@10 -- # set +x 00:21:25.209 ************************************ 00:21:25.209 END TEST nvmf_identify_passthru 00:21:25.209 ************************************ 00:21:25.209 03:35:02 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:21:25.209 03:35:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:25.209 03:35:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:25.209 03:35:02 -- common/autotest_common.sh@10 -- # set +x 00:21:25.209 ************************************ 00:21:25.209 START TEST nvmf_dif 00:21:25.209 ************************************ 00:21:25.209 03:35:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:21:25.209 * Looking for test storage... 
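Aside: before nvmf_dif starts, note the teardown pattern that just closed identify_passthru, since every test in this run repeats it. A sketch, assuming _remove_spdk_ns deletes the namespace created by nvmf_tcp_init (the helper's body is not shown in this trace):

    modprobe -v -r nvme-tcp             # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
    kill "$nvmfpid" && wait "$nvmfpid"  # stop the nvmf_tgt reactor process (pid 335308 here)
    ip netns delete cvl_0_0_ns_spdk     # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1            # clear the initiator-side interface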
00:21:25.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:25.209 03:35:02 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.209 03:35:02 -- nvmf/common.sh@7 -- # uname -s 00:21:25.209 03:35:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.209 03:35:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.209 03:35:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.209 03:35:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.209 03:35:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.209 03:35:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.209 03:35:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.209 03:35:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.209 03:35:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.209 03:35:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.209 03:35:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.209 03:35:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.209 03:35:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.209 03:35:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.209 03:35:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.209 03:35:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.209 03:35:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.209 03:35:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.209 03:35:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.209 03:35:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.209 03:35:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.209 03:35:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.209 03:35:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.209 03:35:02 -- paths/export.sh@5 -- # export PATH 00:21:25.209 03:35:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.209 03:35:02 -- nvmf/common.sh@47 -- # : 0 00:21:25.209 03:35:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.209 03:35:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:25.209 03:35:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.209 03:35:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.209 03:35:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.209 03:35:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.209 03:35:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.209 03:35:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.209 03:35:02 -- target/dif.sh@15 -- # NULL_META=16 00:21:25.209 03:35:02 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:25.210 03:35:02 -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:25.210 03:35:02 -- target/dif.sh@15 -- # NULL_DIF=1 00:21:25.210 03:35:02 -- target/dif.sh@135 -- # nvmftestinit 00:21:25.210 03:35:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:25.210 03:35:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.210 03:35:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:25.210 03:35:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:25.210 03:35:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:25.210 03:35:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.210 03:35:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:25.210 03:35:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.210 03:35:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:25.210 03:35:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:25.210 03:35:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:25.210 03:35:02 -- common/autotest_common.sh@10 -- # set +x 00:21:27.116 03:35:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:27.116 03:35:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.116 03:35:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.116 03:35:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.116 03:35:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.116 03:35:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.116 03:35:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.116 03:35:04 -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.116 03:35:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.116 03:35:04 -- nvmf/common.sh@296 -- # e810=() 00:21:27.116 03:35:04 -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.116 03:35:04 -- nvmf/common.sh@297 -- # x722=() 00:21:27.116 03:35:04 -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.116 03:35:04 -- nvmf/common.sh@298 -- # mlx=() 00:21:27.116 03:35:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.116 03:35:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.116 03:35:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.116 03:35:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.116 03:35:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:21:27.116 03:35:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.116 03:35:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.116 03:35:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.116 03:35:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.116 03:35:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.116 03:35:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.116 03:35:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.116 03:35:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.116 03:35:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.116 03:35:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.116 03:35:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.116 03:35:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:27.116 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:27.116 03:35:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.116 03:35:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:27.116 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:27.116 03:35:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.116 03:35:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.116 03:35:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.116 03:35:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.116 03:35:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:27.116 03:35:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.116 03:35:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:27.116 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:27.116 03:35:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.116 03:35:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.116 03:35:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.116 03:35:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:27.117 03:35:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.117 03:35:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:27.117 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:27.117 03:35:04 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:27.117 03:35:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:27.117 03:35:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:27.117 03:35:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:27.117 03:35:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:27.117 03:35:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:27.117 03:35:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.117 03:35:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.117 03:35:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.117 03:35:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:27.117 03:35:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.117 03:35:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.117 03:35:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:27.117 03:35:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.117 03:35:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.117 03:35:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:27.117 03:35:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:27.117 03:35:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.117 03:35:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.117 03:35:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.117 03:35:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.117 03:35:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:27.117 03:35:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.117 03:35:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.117 03:35:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.117 03:35:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:27.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:21:27.117 00:21:27.117 --- 10.0.0.2 ping statistics --- 00:21:27.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.117 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:27.117 03:35:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:21:27.117 00:21:27.117 --- 10.0.0.1 ping statistics --- 00:21:27.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.117 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:21:27.117 03:35:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.117 03:35:04 -- nvmf/common.sh@411 -- # return 0 00:21:27.117 03:35:04 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:21:27.117 03:35:04 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:21:28.493 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:21:28.493 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:28.493 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:21:28.493 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:21:28.493 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:21:28.493 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:21:28.493 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:21:28.493 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:21:28.493 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:21:28.493 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:21:28.493 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:21:28.493 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:21:28.493 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:21:28.493 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:21:28.493 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:21:28.493 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:21:28.493 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:21:28.493 03:35:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.493 03:35:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:28.493 03:35:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:28.493 03:35:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.493 03:35:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:28.493 03:35:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:28.493 03:35:05 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:28.493 03:35:05 -- target/dif.sh@137 -- # nvmfappstart 00:21:28.493 03:35:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:28.493 03:35:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:28.493 03:35:05 -- common/autotest_common.sh@10 -- # set +x 00:21:28.493 03:35:05 -- nvmf/common.sh@470 -- # nvmfpid=338572 00:21:28.493 03:35:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:28.493 03:35:05 -- nvmf/common.sh@471 -- # waitforlisten 338572 00:21:28.493 03:35:05 -- common/autotest_common.sh@817 -- # '[' -z 338572 ']' 00:21:28.493 03:35:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.493 03:35:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:28.493 03:35:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
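Aside: the namespace plumbing that nvmf_tcp_init traced above (identify_passthru used the same addresses earlier) condenses to the following; all interface and namespace names come straight from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # plus the reverse ping from inside the namespace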
00:21:28.494 03:35:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:28.494 03:35:05 -- common/autotest_common.sh@10 -- # set +x 00:21:28.494 [2024-04-19 03:35:05.883959] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:21:28.494 [2024-04-19 03:35:05.884028] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.494 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.494 [2024-04-19 03:35:05.945910] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.494 [2024-04-19 03:35:06.048244] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.494 [2024-04-19 03:35:06.048295] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.494 [2024-04-19 03:35:06.048309] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.494 [2024-04-19 03:35:06.048335] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.494 [2024-04-19 03:35:06.048345] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.494 [2024-04-19 03:35:06.048375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.752 03:35:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:28.752 03:35:06 -- common/autotest_common.sh@850 -- # return 0 00:21:28.752 03:35:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:28.752 03:35:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:28.752 03:35:06 -- common/autotest_common.sh@10 -- # set +x 00:21:28.752 03:35:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.752 03:35:06 -- target/dif.sh@139 -- # create_transport 00:21:28.752 03:35:06 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:28.752 03:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.752 03:35:06 -- common/autotest_common.sh@10 -- # set +x 00:21:28.752 [2024-04-19 03:35:06.188385] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.752 03:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.752 03:35:06 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:28.752 03:35:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:28.752 03:35:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:28.752 03:35:06 -- common/autotest_common.sh@10 -- # set +x 00:21:28.752 ************************************ 00:21:28.752 START TEST fio_dif_1_default 00:21:28.752 ************************************ 00:21:28.752 03:35:06 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:21:28.752 03:35:06 -- target/dif.sh@86 -- # create_subsystems 0 00:21:28.752 03:35:06 -- target/dif.sh@28 -- # local sub 00:21:28.752 03:35:06 -- target/dif.sh@30 -- # for sub in "$@" 00:21:28.752 03:35:06 -- target/dif.sh@31 -- # create_subsystem 0 00:21:28.752 03:35:06 -- target/dif.sh@18 -- # local sub_id=0 00:21:28.752 03:35:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:28.752 03:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.752 03:35:06 -- common/autotest_common.sh@10 -- # set +x 00:21:28.752 
bdev_null0 00:21:28.752 03:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.752 03:35:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:28.752 03:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.752 03:35:06 -- common/autotest_common.sh@10 -- # set +x 00:21:29.010 03:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.010 03:35:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:29.010 03:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.010 03:35:06 -- common/autotest_common.sh@10 -- # set +x 00:21:29.010 03:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.011 03:35:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:29.011 03:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.011 03:35:06 -- common/autotest_common.sh@10 -- # set +x 00:21:29.011 [2024-04-19 03:35:06.324930] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.011 03:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.011 03:35:06 -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:29.011 03:35:06 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:29.011 03:35:06 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:29.011 03:35:06 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:29.011 03:35:06 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:29.011 03:35:06 -- nvmf/common.sh@521 -- # config=() 00:21:29.011 03:35:06 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:29.011 03:35:06 -- target/dif.sh@82 -- # gen_fio_conf 00:21:29.011 03:35:06 -- nvmf/common.sh@521 -- # local subsystem config 00:21:29.011 03:35:06 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:29.011 03:35:06 -- target/dif.sh@54 -- # local file 00:21:29.011 03:35:06 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:29.011 03:35:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:29.011 03:35:06 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:29.011 03:35:06 -- target/dif.sh@56 -- # cat 00:21:29.011 03:35:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:29.011 { 00:21:29.011 "params": { 00:21:29.011 "name": "Nvme$subsystem", 00:21:29.011 "trtype": "$TEST_TRANSPORT", 00:21:29.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.011 "adrfam": "ipv4", 00:21:29.011 "trsvcid": "$NVMF_PORT", 00:21:29.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.011 "hdgst": ${hdgst:-false}, 00:21:29.011 "ddgst": ${ddgst:-false} 00:21:29.011 }, 00:21:29.011 "method": "bdev_nvme_attach_controller" 00:21:29.011 } 00:21:29.011 EOF 00:21:29.011 )") 00:21:29.011 03:35:06 -- common/autotest_common.sh@1327 -- # shift 00:21:29.011 03:35:06 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:29.011 03:35:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:29.011 03:35:06 -- nvmf/common.sh@543 -- # cat 00:21:29.011 03:35:06 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:29.011 03:35:06 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:29.011 03:35:06 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:29.011 03:35:06 -- target/dif.sh@72 -- # (( file <= files )) 00:21:29.011 03:35:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:29.011 03:35:06 -- nvmf/common.sh@545 -- # jq . 00:21:29.011 03:35:06 -- nvmf/common.sh@546 -- # IFS=, 00:21:29.011 03:35:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:29.011 "params": { 00:21:29.011 "name": "Nvme0", 00:21:29.011 "trtype": "tcp", 00:21:29.011 "traddr": "10.0.0.2", 00:21:29.011 "adrfam": "ipv4", 00:21:29.011 "trsvcid": "4420", 00:21:29.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:29.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:29.011 "hdgst": false, 00:21:29.011 "ddgst": false 00:21:29.011 }, 00:21:29.011 "method": "bdev_nvme_attach_controller" 00:21:29.011 }' 00:21:29.011 03:35:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:29.011 03:35:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:29.011 03:35:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:29.011 03:35:06 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:29.011 03:35:06 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:29.011 03:35:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:29.011 03:35:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:29.011 03:35:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:29.011 03:35:06 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:21:29.011 03:35:06 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:29.270 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:29.270 fio-3.35 00:21:29.270 Starting 1 thread 00:21:29.270 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.476 00:21:41.476 filename0: (groupid=0, jobs=1): err= 0: pid=338803: Fri Apr 19 03:35:17 2024 00:21:41.476 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10025msec) 00:21:41.476 slat (nsec): min=6843, max=87225, avg=9217.30, stdev=4583.45 00:21:41.476 clat (usec): min=40858, max=47371, avg=41055.20, stdev=471.32 00:21:41.476 lat (usec): min=40866, max=47405, avg=41064.41, stdev=471.94 00:21:41.476 clat percentiles (usec): 00:21:41.476 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:21:41.476 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:41.476 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:21:41.476 | 99.00th=[42206], 99.50th=[42730], 99.90th=[47449], 99.95th=[47449], 00:21:41.476 | 99.99th=[47449] 00:21:41.476 bw ( KiB/s): min= 384, max= 416, per=99.63%, avg=388.80, stdev=11.72, samples=20 00:21:41.476 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:21:41.476 lat (msec) : 50=100.00% 00:21:41.476 cpu : usr=88.90%, sys=10.83%, ctx=16, majf=0, minf=249 00:21:41.476 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:41.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.476 issued rwts: total=976,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:21:41.476 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:41.476 00:21:41.476 Run status group 0 (all jobs): 00:21:41.476 READ: bw=389KiB/s (399kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10025-10025msec 00:21:41.476 03:35:17 -- target/dif.sh@88 -- # destroy_subsystems 0 00:21:41.476 03:35:17 -- target/dif.sh@43 -- # local sub 00:21:41.476 03:35:17 -- target/dif.sh@45 -- # for sub in "$@" 00:21:41.476 03:35:17 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:41.476 03:35:17 -- target/dif.sh@36 -- # local sub_id=0 00:21:41.476 03:35:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:41.476 03:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.476 03:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:41.476 03:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.476 03:35:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:41.476 03:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.476 03:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:41.476 03:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.476 00:21:41.477 real 0m11.188s 00:21:41.477 user 0m10.009s 00:21:41.477 sys 0m1.352s 00:21:41.477 03:35:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:41.477 03:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:41.477 ************************************ 00:21:41.477 END TEST fio_dif_1_default 00:21:41.477 ************************************ 00:21:41.477 03:35:17 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:21:41.477 03:35:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:41.477 03:35:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:41.477 03:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:41.477 ************************************ 00:21:41.477 START TEST fio_dif_1_multi_subsystems 00:21:41.477 ************************************ 00:21:41.477 03:35:17 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:21:41.477 03:35:17 -- target/dif.sh@92 -- # local files=1 00:21:41.477 03:35:17 -- target/dif.sh@94 -- # create_subsystems 0 1 00:21:41.477 03:35:17 -- target/dif.sh@28 -- # local sub 00:21:41.477 03:35:17 -- target/dif.sh@30 -- # for sub in "$@" 00:21:41.477 03:35:17 -- target/dif.sh@31 -- # create_subsystem 0 00:21:41.477 03:35:17 -- target/dif.sh@18 -- # local sub_id=0 00:21:41.477 03:35:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:41.477 03:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.477 03:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:41.477 bdev_null0 00:21:41.477 03:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.477 03:35:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:41.477 03:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.477 03:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:41.477 03:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.477 03:35:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:41.477 03:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.477 03:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:41.477 03:35:17 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:21:41.477 03:35:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:41.477 03:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.477 03:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:41.477 [2024-04-19 03:35:17.639341] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.477 03:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.477 03:35:17 -- target/dif.sh@30 -- # for sub in "$@" 00:21:41.477 03:35:17 -- target/dif.sh@31 -- # create_subsystem 1 00:21:41.477 03:35:17 -- target/dif.sh@18 -- # local sub_id=1 00:21:41.477 03:35:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:41.477 03:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.477 03:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:41.477 bdev_null1 00:21:41.477 03:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.477 03:35:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:41.477 03:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.477 03:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:41.477 03:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.477 03:35:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:41.477 03:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.477 03:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:41.477 03:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.477 03:35:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.477 03:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.477 03:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:41.477 03:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.477 03:35:17 -- target/dif.sh@95 -- # fio /dev/fd/62 00:21:41.477 03:35:17 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:21:41.477 03:35:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:41.477 03:35:17 -- nvmf/common.sh@521 -- # config=() 00:21:41.477 03:35:17 -- nvmf/common.sh@521 -- # local subsystem config 00:21:41.477 03:35:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:41.477 03:35:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:41.477 { 00:21:41.477 "params": { 00:21:41.477 "name": "Nvme$subsystem", 00:21:41.477 "trtype": "$TEST_TRANSPORT", 00:21:41.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.477 "adrfam": "ipv4", 00:21:41.477 "trsvcid": "$NVMF_PORT", 00:21:41.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.477 "hdgst": ${hdgst:-false}, 00:21:41.477 "ddgst": ${ddgst:-false} 00:21:41.477 }, 00:21:41.477 "method": "bdev_nvme_attach_controller" 00:21:41.477 } 00:21:41.477 EOF 00:21:41.477 )") 00:21:41.477 03:35:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:41.477 03:35:17 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:41.477 03:35:17 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:41.477 03:35:17 
-- target/dif.sh@82 -- # gen_fio_conf 00:21:41.477 03:35:17 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:41.477 03:35:17 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:41.477 03:35:17 -- target/dif.sh@54 -- # local file 00:21:41.477 03:35:17 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:41.477 03:35:17 -- target/dif.sh@56 -- # cat 00:21:41.477 03:35:17 -- common/autotest_common.sh@1327 -- # shift 00:21:41.477 03:35:17 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:41.477 03:35:17 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:41.477 03:35:17 -- nvmf/common.sh@543 -- # cat 00:21:41.477 03:35:17 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:41.477 03:35:17 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:41.477 03:35:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:41.477 03:35:17 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:41.477 03:35:17 -- target/dif.sh@72 -- # (( file <= files )) 00:21:41.477 03:35:17 -- target/dif.sh@73 -- # cat 00:21:41.477 03:35:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:41.477 03:35:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:41.477 { 00:21:41.477 "params": { 00:21:41.477 "name": "Nvme$subsystem", 00:21:41.477 "trtype": "$TEST_TRANSPORT", 00:21:41.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.477 "adrfam": "ipv4", 00:21:41.477 "trsvcid": "$NVMF_PORT", 00:21:41.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.477 "hdgst": ${hdgst:-false}, 00:21:41.477 "ddgst": ${ddgst:-false} 00:21:41.477 }, 00:21:41.477 "method": "bdev_nvme_attach_controller" 00:21:41.477 } 00:21:41.477 EOF 00:21:41.477 )") 00:21:41.477 03:35:17 -- nvmf/common.sh@543 -- # cat 00:21:41.477 03:35:17 -- target/dif.sh@72 -- # (( file++ )) 00:21:41.477 03:35:17 -- target/dif.sh@72 -- # (( file <= files )) 00:21:41.477 03:35:17 -- nvmf/common.sh@545 -- # jq . 
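Aside: the per-subsystem JSON fragments generated above are joined by jq and handed to fio together with the job file, as the next lines of the trace show. A condensed sketch of that handoff, assuming the standard SPDK JSON-config wrapper (the wrapper itself is not printed in this trace) and a hypothetical job file name:

    cfg=$(mktemp)   # stands in for the /dev/fd/62 descriptor used in the trace
    cat > "$cfg" <<'EOF'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_nvme_attach_controller","params":{
        "name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
        "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode0",
        "hostnqn":"nqn.2016-06.io.spdk:host0","hdgst":false,"ddgst":false}}]}]}
    EOF
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf="$cfg" dif.fio   # dif.fio is a placeholder job file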
00:21:41.477 03:35:17 -- nvmf/common.sh@546 -- # IFS=, 00:21:41.477 03:35:17 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:41.477 "params": { 00:21:41.477 "name": "Nvme0", 00:21:41.477 "trtype": "tcp", 00:21:41.477 "traddr": "10.0.0.2", 00:21:41.477 "adrfam": "ipv4", 00:21:41.477 "trsvcid": "4420", 00:21:41.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:41.477 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:41.477 "hdgst": false, 00:21:41.477 "ddgst": false 00:21:41.477 }, 00:21:41.477 "method": "bdev_nvme_attach_controller" 00:21:41.477 },{ 00:21:41.477 "params": { 00:21:41.477 "name": "Nvme1", 00:21:41.477 "trtype": "tcp", 00:21:41.477 "traddr": "10.0.0.2", 00:21:41.477 "adrfam": "ipv4", 00:21:41.477 "trsvcid": "4420", 00:21:41.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.477 "hdgst": false, 00:21:41.477 "ddgst": false 00:21:41.477 }, 00:21:41.477 "method": "bdev_nvme_attach_controller" 00:21:41.477 }' 00:21:41.477 03:35:17 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:41.477 03:35:17 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:41.477 03:35:17 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:41.477 03:35:17 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:41.477 03:35:17 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:41.477 03:35:17 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:41.477 03:35:17 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:41.477 03:35:17 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:41.477 03:35:17 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:21:41.477 03:35:17 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:41.477 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:41.477 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:41.477 fio-3.35 00:21:41.477 Starting 2 threads 00:21:41.477 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.519 00:21:51.519 filename0: (groupid=0, jobs=1): err= 0: pid=340223: Fri Apr 19 03:35:28 2024 00:21:51.519 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10009msec) 00:21:51.519 slat (nsec): min=7106, max=38837, avg=9017.30, stdev=2973.00 00:21:51.519 clat (usec): min=40828, max=42969, avg=41330.78, stdev=486.50 00:21:51.519 lat (usec): min=40836, max=42982, avg=41339.80, stdev=486.65 00:21:51.519 clat percentiles (usec): 00:21:51.519 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:21:51.519 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:51.519 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:21:51.519 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:21:51.519 | 99.99th=[42730] 00:21:51.519 bw ( KiB/s): min= 352, max= 416, per=33.59%, avg=385.60, stdev=12.61, samples=20 00:21:51.519 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:21:51.519 lat (msec) : 50=100.00% 00:21:51.519 cpu : usr=94.35%, sys=5.35%, ctx=19, majf=0, minf=116 00:21:51.519 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:51.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:21:51.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.519 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.519 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:51.519 filename1: (groupid=0, jobs=1): err= 0: pid=340224: Fri Apr 19 03:35:28 2024 00:21:51.519 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec) 00:21:51.519 slat (nsec): min=7063, max=70468, avg=8707.91, stdev=2864.30 00:21:51.519 clat (usec): min=786, max=42795, avg=21031.84, stdev=20129.70 00:21:51.519 lat (usec): min=793, max=42823, avg=21040.55, stdev=20129.53 00:21:51.519 clat percentiles (usec): 00:21:51.519 | 1.00th=[ 816], 5.00th=[ 832], 10.00th=[ 840], 20.00th=[ 857], 00:21:51.519 | 30.00th=[ 865], 40.00th=[ 881], 50.00th=[41157], 60.00th=[41157], 00:21:51.519 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:51.519 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:21:51.519 | 99.99th=[42730] 00:21:51.519 bw ( KiB/s): min= 704, max= 768, per=66.13%, avg=758.40, stdev=21.02, samples=20 00:21:51.519 iops : min= 176, max= 192, avg=189.60, stdev= 5.26, samples=20 00:21:51.519 lat (usec) : 1000=49.84% 00:21:51.519 lat (msec) : 2=0.05%, 50=50.11% 00:21:51.519 cpu : usr=94.44%, sys=5.26%, ctx=17, majf=0, minf=205 00:21:51.519 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:51.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.519 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.519 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:51.519 00:21:51.519 Run status group 0 (all jobs): 00:21:51.519 READ: bw=1146KiB/s (1174kB/s), 387KiB/s-760KiB/s (396kB/s-778kB/s), io=11.2MiB (11.7MB), run=10003-10009msec 00:21:51.519 03:35:28 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:51.519 03:35:28 -- target/dif.sh@43 -- # local sub 00:21:51.519 03:35:28 -- target/dif.sh@45 -- # for sub in "$@" 00:21:51.519 03:35:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:51.519 03:35:28 -- target/dif.sh@36 -- # local sub_id=0 00:21:51.519 03:35:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:51.519 03:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:51.519 03:35:28 -- common/autotest_common.sh@10 -- # set +x 00:21:51.519 03:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.519 03:35:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:51.519 03:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:51.519 03:35:28 -- common/autotest_common.sh@10 -- # set +x 00:21:51.519 03:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.519 03:35:28 -- target/dif.sh@45 -- # for sub in "$@" 00:21:51.519 03:35:28 -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:51.519 03:35:28 -- target/dif.sh@36 -- # local sub_id=1 00:21:51.519 03:35:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:51.519 03:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:51.519 03:35:28 -- common/autotest_common.sh@10 -- # set +x 00:21:51.519 03:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.519 03:35:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:51.519 03:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:51.519 03:35:28 -- 
common/autotest_common.sh@10 -- # set +x 00:21:51.519 03:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.519 00:21:51.519 real 0m11.379s 00:21:51.519 user 0m20.345s 00:21:51.519 sys 0m1.341s 00:21:51.519 03:35:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:51.519 03:35:28 -- common/autotest_common.sh@10 -- # set +x 00:21:51.519 ************************************ 00:21:51.519 END TEST fio_dif_1_multi_subsystems 00:21:51.519 ************************************ 00:21:51.519 03:35:29 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:51.519 03:35:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:51.519 03:35:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:51.519 03:35:29 -- common/autotest_common.sh@10 -- # set +x 00:21:51.778 ************************************ 00:21:51.778 START TEST fio_dif_rand_params 00:21:51.778 ************************************ 00:21:51.778 03:35:29 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:21:51.778 03:35:29 -- target/dif.sh@100 -- # local NULL_DIF 00:21:51.778 03:35:29 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:51.778 03:35:29 -- target/dif.sh@103 -- # NULL_DIF=3 00:21:51.778 03:35:29 -- target/dif.sh@103 -- # bs=128k 00:21:51.778 03:35:29 -- target/dif.sh@103 -- # numjobs=3 00:21:51.778 03:35:29 -- target/dif.sh@103 -- # iodepth=3 00:21:51.778 03:35:29 -- target/dif.sh@103 -- # runtime=5 00:21:51.778 03:35:29 -- target/dif.sh@105 -- # create_subsystems 0 00:21:51.778 03:35:29 -- target/dif.sh@28 -- # local sub 00:21:51.778 03:35:29 -- target/dif.sh@30 -- # for sub in "$@" 00:21:51.778 03:35:29 -- target/dif.sh@31 -- # create_subsystem 0 00:21:51.778 03:35:29 -- target/dif.sh@18 -- # local sub_id=0 00:21:51.778 03:35:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:51.778 03:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:51.778 03:35:29 -- common/autotest_common.sh@10 -- # set +x 00:21:51.778 bdev_null0 00:21:51.778 03:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.778 03:35:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:51.778 03:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:51.778 03:35:29 -- common/autotest_common.sh@10 -- # set +x 00:21:51.778 03:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.778 03:35:29 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:51.778 03:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:51.778 03:35:29 -- common/autotest_common.sh@10 -- # set +x 00:21:51.778 03:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.778 03:35:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:51.778 03:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:51.778 03:35:29 -- common/autotest_common.sh@10 -- # set +x 00:21:51.778 [2024-04-19 03:35:29.141186] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.778 03:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.778 03:35:29 -- target/dif.sh@106 -- # fio /dev/fd/62 00:21:51.778 03:35:29 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:51.778 03:35:29 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:51.778 03:35:29 
-- nvmf/common.sh@521 -- # config=() 00:21:51.778 03:35:29 -- nvmf/common.sh@521 -- # local subsystem config 00:21:51.778 03:35:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:51.778 03:35:29 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:51.778 03:35:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:51.778 { 00:21:51.778 "params": { 00:21:51.778 "name": "Nvme$subsystem", 00:21:51.778 "trtype": "$TEST_TRANSPORT", 00:21:51.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.778 "adrfam": "ipv4", 00:21:51.778 "trsvcid": "$NVMF_PORT", 00:21:51.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.778 "hdgst": ${hdgst:-false}, 00:21:51.778 "ddgst": ${ddgst:-false} 00:21:51.778 }, 00:21:51.778 "method": "bdev_nvme_attach_controller" 00:21:51.778 } 00:21:51.778 EOF 00:21:51.778 )") 00:21:51.778 03:35:29 -- target/dif.sh@82 -- # gen_fio_conf 00:21:51.778 03:35:29 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:51.778 03:35:29 -- target/dif.sh@54 -- # local file 00:21:51.778 03:35:29 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:51.778 03:35:29 -- target/dif.sh@56 -- # cat 00:21:51.778 03:35:29 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:51.778 03:35:29 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:51.778 03:35:29 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:51.778 03:35:29 -- common/autotest_common.sh@1327 -- # shift 00:21:51.778 03:35:29 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:51.778 03:35:29 -- nvmf/common.sh@543 -- # cat 00:21:51.778 03:35:29 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:51.778 03:35:29 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:51.778 03:35:29 -- target/dif.sh@72 -- # (( file <= files )) 00:21:51.778 03:35:29 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:51.778 03:35:29 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:51.778 03:35:29 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:51.778 03:35:29 -- nvmf/common.sh@545 -- # jq . 
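Interleaved with the config generation is the plugin-preload logic from autotest_common.sh: ldd inspects the spdk_bdev fio plugin for each known sanitizer runtime (libasan, then libclang_rt.asan), and any hit is prepended to LD_PRELOAD so the sanitizer initializes before the preloaded plugin. Here both greps come back empty, hence the bare [[ -n '' ]] checks, and LD_PRELOAD ends up holding only the plugin itself. A condensed sketch under those assumptions (json_conf and fio_job stand in for the /dev/fd inputs the harness supplies):

#!/usr/bin/env bash
# Condensed from the fio plugin trace in autotest_common.sh: if the
# spdk_bdev plugin links a sanitizer runtime, preload it before the plugin.
json_conf=${1:?path to SPDK JSON config}
fio_job=${2:?path to fio job file}
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
sanitizers=('libasan' 'libclang_rt.asan')
asan_libs=
for sanitizer in "${sanitizers[@]}"; do
    # Third ldd column is the resolved library path.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && asan_libs="$asan_libs $asan_lib"
done
# asan_libs stays empty on a non-sanitized build, as in this log, so only
# the plugin itself ends up in LD_PRELOAD.
LD_PRELOAD="$asan_libs $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf "$json_conf" "$fio_job"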
00:21:51.778 03:35:29 -- nvmf/common.sh@546 -- # IFS=, 00:21:51.778 03:35:29 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:51.778 "params": { 00:21:51.778 "name": "Nvme0", 00:21:51.778 "trtype": "tcp", 00:21:51.778 "traddr": "10.0.0.2", 00:21:51.778 "adrfam": "ipv4", 00:21:51.778 "trsvcid": "4420", 00:21:51.778 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:51.778 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:51.778 "hdgst": false, 00:21:51.778 "ddgst": false 00:21:51.778 }, 00:21:51.778 "method": "bdev_nvme_attach_controller" 00:21:51.778 }' 00:21:51.778 03:35:29 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:51.778 03:35:29 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:51.778 03:35:29 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:51.778 03:35:29 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:51.778 03:35:29 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:51.778 03:35:29 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:51.778 03:35:29 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:51.778 03:35:29 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:51.778 03:35:29 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:21:51.778 03:35:29 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:52.037 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:52.037 ... 00:21:52.037 fio-3.35 00:21:52.037 Starting 3 threads 00:21:52.037 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.594 00:21:58.594 filename0: (groupid=0, jobs=1): err= 0: pid=341635: Fri Apr 19 03:35:34 2024 00:21:58.594 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(123MiB/5045msec) 00:21:58.594 slat (nsec): min=7424, max=40132, avg=12334.04, stdev=3093.29 00:21:58.594 clat (usec): min=5456, max=58767, avg=15365.88, stdev=13227.00 00:21:58.594 lat (usec): min=5468, max=58780, avg=15378.21, stdev=13226.97 00:21:58.594 clat percentiles (usec): 00:21:58.594 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 7832], 20.00th=[ 8848], 00:21:58.594 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[11338], 60.00th=[12125], 00:21:58.594 | 70.00th=[12911], 80.00th=[13960], 90.00th=[49546], 95.00th=[52167], 00:21:58.594 | 99.00th=[54789], 99.50th=[56361], 99.90th=[58983], 99.95th=[58983], 00:21:58.594 | 99.99th=[58983] 00:21:58.594 bw ( KiB/s): min=19200, max=34816, per=33.05%, avg=25062.40, stdev=5318.74, samples=10 00:21:58.594 iops : min= 150, max= 272, avg=195.80, stdev=41.55, samples=10 00:21:58.594 lat (msec) : 10=38.84%, 20=49.95%, 50=1.43%, 100=9.79% 00:21:58.594 cpu : usr=90.25%, sys=9.28%, ctx=8, majf=0, minf=53 00:21:58.594 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:58.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.594 issued rwts: total=981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.594 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:58.594 filename0: (groupid=0, jobs=1): err= 0: pid=341636: Fri Apr 19 03:35:34 2024 00:21:58.594 read: IOPS=192, BW=24.0MiB/s (25.2MB/s)(121MiB/5046msec) 00:21:58.594 slat (nsec): min=7391, max=64336, avg=12337.07, stdev=3824.82 00:21:58.594 clat (usec): 
min=5763, max=58590, avg=15541.54, stdev=12976.01 00:21:58.594 lat (usec): min=5774, max=58602, avg=15553.88, stdev=12975.89 00:21:58.594 clat percentiles (usec): 00:21:58.594 | 1.00th=[ 6521], 5.00th=[ 7439], 10.00th=[ 8160], 20.00th=[ 8979], 00:21:58.594 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[11600], 60.00th=[12518], 00:21:58.594 | 70.00th=[13435], 80.00th=[14746], 90.00th=[50070], 95.00th=[52691], 00:21:58.594 | 99.00th=[55313], 99.50th=[55837], 99.90th=[58459], 99.95th=[58459], 00:21:58.594 | 99.99th=[58459] 00:21:58.594 bw ( KiB/s): min=17920, max=29184, per=32.66%, avg=24760.10, stdev=3383.19, samples=10 00:21:58.594 iops : min= 140, max= 228, avg=193.40, stdev=26.43, samples=10 00:21:58.594 lat (msec) : 10=37.42%, 20=51.86%, 50=0.82%, 100=9.90% 00:21:58.594 cpu : usr=90.84%, sys=8.72%, ctx=13, majf=0, minf=116 00:21:58.594 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:58.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.594 issued rwts: total=970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.594 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:58.594 filename0: (groupid=0, jobs=1): err= 0: pid=341637: Fri Apr 19 03:35:34 2024 00:21:58.594 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(130MiB/5024msec) 00:21:58.594 slat (nsec): min=6612, max=45387, avg=12472.62, stdev=3397.90 00:21:58.594 clat (usec): min=6405, max=59368, avg=14499.74, stdev=10121.56 00:21:58.594 lat (usec): min=6417, max=59381, avg=14512.21, stdev=10121.63 00:21:58.594 clat percentiles (usec): 00:21:58.594 | 1.00th=[ 6915], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 9634], 00:21:58.594 | 30.00th=[10421], 40.00th=[11207], 50.00th=[11994], 60.00th=[12911], 00:21:58.594 | 70.00th=[13960], 80.00th=[15270], 90.00th=[17695], 95.00th=[51119], 00:21:58.594 | 99.00th=[54789], 99.50th=[55313], 99.90th=[58459], 99.95th=[59507], 00:21:58.594 | 99.99th=[59507] 00:21:58.594 bw ( KiB/s): min=21248, max=30208, per=34.95%, avg=26496.00, stdev=3003.08, samples=10 00:21:58.594 iops : min= 166, max= 236, avg=207.00, stdev=23.46, samples=10 00:21:58.594 lat (msec) : 10=23.31%, 20=70.62%, 50=0.48%, 100=5.59% 00:21:58.594 cpu : usr=89.67%, sys=9.85%, ctx=15, majf=0, minf=125 00:21:58.594 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:58.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.594 issued rwts: total=1038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.594 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:58.594 00:21:58.594 Run status group 0 (all jobs): 00:21:58.594 READ: bw=74.0MiB/s (77.6MB/s), 24.0MiB/s-25.8MiB/s (25.2MB/s-27.1MB/s), io=374MiB (392MB), run=5024-5046msec 00:21:58.594 03:35:35 -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:58.594 03:35:35 -- target/dif.sh@43 -- # local sub 00:21:58.594 03:35:35 -- target/dif.sh@45 -- # for sub in "$@" 00:21:58.594 03:35:35 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:58.594 03:35:35 -- target/dif.sh@36 -- # local sub_id=0 00:21:58.594 03:35:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:58.594 03:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.594 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.594 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:21:58.594 03:35:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:58.594 03:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.594 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.594 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.594 03:35:35 -- target/dif.sh@109 -- # NULL_DIF=2 00:21:58.594 03:35:35 -- target/dif.sh@109 -- # bs=4k 00:21:58.594 03:35:35 -- target/dif.sh@109 -- # numjobs=8 00:21:58.594 03:35:35 -- target/dif.sh@109 -- # iodepth=16 00:21:58.594 03:35:35 -- target/dif.sh@109 -- # runtime= 00:21:58.594 03:35:35 -- target/dif.sh@109 -- # files=2 00:21:58.594 03:35:35 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:58.594 03:35:35 -- target/dif.sh@28 -- # local sub 00:21:58.594 03:35:35 -- target/dif.sh@30 -- # for sub in "$@" 00:21:58.594 03:35:35 -- target/dif.sh@31 -- # create_subsystem 0 00:21:58.594 03:35:35 -- target/dif.sh@18 -- # local sub_id=0 00:21:58.594 03:35:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:21:58.594 03:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.594 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.595 bdev_null0 00:21:58.595 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.595 03:35:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:58.595 03:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.595 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.595 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.595 03:35:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:58.595 03:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.595 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.595 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.595 03:35:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:58.595 03:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.595 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.595 [2024-04-19 03:35:35.326050] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.595 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.595 03:35:35 -- target/dif.sh@30 -- # for sub in "$@" 00:21:58.595 03:35:35 -- target/dif.sh@31 -- # create_subsystem 1 00:21:58.595 03:35:35 -- target/dif.sh@18 -- # local sub_id=1 00:21:58.595 03:35:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:58.595 03:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.595 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.595 bdev_null1 00:21:58.595 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.595 03:35:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:58.595 03:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.595 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.595 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.595 03:35:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:58.595 03:35:35 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.595 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.595 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.595 03:35:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.595 03:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.595 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.595 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.595 03:35:35 -- target/dif.sh@30 -- # for sub in "$@" 00:21:58.595 03:35:35 -- target/dif.sh@31 -- # create_subsystem 2 00:21:58.595 03:35:35 -- target/dif.sh@18 -- # local sub_id=2 00:21:58.595 03:35:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:58.595 03:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.595 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.595 bdev_null2 00:21:58.595 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.595 03:35:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:58.595 03:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.595 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.595 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.595 03:35:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:58.595 03:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.595 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.595 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.595 03:35:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:58.595 03:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.595 03:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.595 03:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.595 03:35:35 -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:58.595 03:35:35 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:58.595 03:35:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:58.595 03:35:35 -- nvmf/common.sh@521 -- # config=() 00:21:58.595 03:35:35 -- nvmf/common.sh@521 -- # local subsystem config 00:21:58.595 03:35:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:58.595 03:35:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:58.595 03:35:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:58.595 { 00:21:58.595 "params": { 00:21:58.595 "name": "Nvme$subsystem", 00:21:58.595 "trtype": "$TEST_TRANSPORT", 00:21:58.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.595 "adrfam": "ipv4", 00:21:58.595 "trsvcid": "$NVMF_PORT", 00:21:58.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.595 "hdgst": ${hdgst:-false}, 00:21:58.595 "ddgst": ${ddgst:-false} 00:21:58.595 }, 00:21:58.595 "method": "bdev_nvme_attach_controller" 00:21:58.595 } 00:21:58.595 EOF 00:21:58.595 )") 00:21:58.595 03:35:35 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:58.595 03:35:35 -- common/autotest_common.sh@1323 -- # 
local fio_dir=/usr/src/fio 00:21:58.595 03:35:35 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:58.595 03:35:35 -- target/dif.sh@82 -- # gen_fio_conf 00:21:58.595 03:35:35 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:58.595 03:35:35 -- target/dif.sh@54 -- # local file 00:21:58.595 03:35:35 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:58.595 03:35:35 -- common/autotest_common.sh@1327 -- # shift 00:21:58.595 03:35:35 -- target/dif.sh@56 -- # cat 00:21:58.595 03:35:35 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:58.595 03:35:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:58.595 03:35:35 -- nvmf/common.sh@543 -- # cat 00:21:58.595 03:35:35 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:58.595 03:35:35 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:58.595 03:35:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:58.595 03:35:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:58.595 03:35:35 -- target/dif.sh@72 -- # (( file <= files )) 00:21:58.595 03:35:35 -- target/dif.sh@73 -- # cat 00:21:58.595 03:35:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:58.595 03:35:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:58.595 { 00:21:58.595 "params": { 00:21:58.595 "name": "Nvme$subsystem", 00:21:58.595 "trtype": "$TEST_TRANSPORT", 00:21:58.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.595 "adrfam": "ipv4", 00:21:58.595 "trsvcid": "$NVMF_PORT", 00:21:58.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.595 "hdgst": ${hdgst:-false}, 00:21:58.595 "ddgst": ${ddgst:-false} 00:21:58.595 }, 00:21:58.595 "method": "bdev_nvme_attach_controller" 00:21:58.595 } 00:21:58.595 EOF 00:21:58.595 )") 00:21:58.595 03:35:35 -- nvmf/common.sh@543 -- # cat 00:21:58.595 03:35:35 -- target/dif.sh@72 -- # (( file++ )) 00:21:58.595 03:35:35 -- target/dif.sh@72 -- # (( file <= files )) 00:21:58.595 03:35:35 -- target/dif.sh@73 -- # cat 00:21:58.595 03:35:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:58.595 03:35:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:58.595 { 00:21:58.595 "params": { 00:21:58.595 "name": "Nvme$subsystem", 00:21:58.595 "trtype": "$TEST_TRANSPORT", 00:21:58.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.595 "adrfam": "ipv4", 00:21:58.595 "trsvcid": "$NVMF_PORT", 00:21:58.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.595 "hdgst": ${hdgst:-false}, 00:21:58.595 "ddgst": ${ddgst:-false} 00:21:58.595 }, 00:21:58.595 "method": "bdev_nvme_attach_controller" 00:21:58.595 } 00:21:58.595 EOF 00:21:58.595 )") 00:21:58.595 03:35:35 -- target/dif.sh@72 -- # (( file++ )) 00:21:58.595 03:35:35 -- target/dif.sh@72 -- # (( file <= files )) 00:21:58.595 03:35:35 -- nvmf/common.sh@543 -- # cat 00:21:58.595 03:35:35 -- nvmf/common.sh@545 -- # jq . 
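Each create_subsystems/destroy_subsystems pair traced above is four RPCs up and two down: create a null bdev (64 MiB, 512-byte blocks, 16-byte metadata, the requested DIF type), wrap it in an NVMe-oF subsystem, attach the bdev as a namespace, and open the TCP listener; teardown removes the subsystem before the bdev. The same sequence issued directly with scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it, and this assumes the TCP transport was created earlier in the run):

# Bring-up, mirroring create_subsystem 0 with NULL_DIF=2:
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

# Teardown, mirroring destroy_subsystem 0: subsystem first, then the bdev.
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0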
00:21:58.595 03:35:35 -- nvmf/common.sh@546 -- # IFS=, 00:21:58.595 03:35:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:58.595 "params": { 00:21:58.595 "name": "Nvme0", 00:21:58.595 "trtype": "tcp", 00:21:58.595 "traddr": "10.0.0.2", 00:21:58.595 "adrfam": "ipv4", 00:21:58.595 "trsvcid": "4420", 00:21:58.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:58.595 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:58.595 "hdgst": false, 00:21:58.595 "ddgst": false 00:21:58.595 }, 00:21:58.595 "method": "bdev_nvme_attach_controller" 00:21:58.595 },{ 00:21:58.595 "params": { 00:21:58.596 "name": "Nvme1", 00:21:58.596 "trtype": "tcp", 00:21:58.596 "traddr": "10.0.0.2", 00:21:58.596 "adrfam": "ipv4", 00:21:58.596 "trsvcid": "4420", 00:21:58.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.596 "hdgst": false, 00:21:58.596 "ddgst": false 00:21:58.596 }, 00:21:58.596 "method": "bdev_nvme_attach_controller" 00:21:58.596 },{ 00:21:58.596 "params": { 00:21:58.596 "name": "Nvme2", 00:21:58.596 "trtype": "tcp", 00:21:58.596 "traddr": "10.0.0.2", 00:21:58.596 "adrfam": "ipv4", 00:21:58.596 "trsvcid": "4420", 00:21:58.596 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:58.596 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:58.596 "hdgst": false, 00:21:58.596 "ddgst": false 00:21:58.596 }, 00:21:58.596 "method": "bdev_nvme_attach_controller" 00:21:58.596 }' 00:21:58.596 03:35:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:58.596 03:35:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:58.596 03:35:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:58.596 03:35:35 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:58.596 03:35:35 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:58.596 03:35:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:58.596 03:35:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:58.596 03:35:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:58.596 03:35:35 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:21:58.596 03:35:35 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:58.596 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:58.596 ... 00:21:58.596 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:58.596 ... 00:21:58.596 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:58.596 ... 
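The (( file = 1 )) / (( file++ )) counters threaded through the trace are gen_fio_conf emitting one [filenameN] job section per exported subsystem while the heredocs build the matching three-controller JSON; dif.sh then hands both generated files to fio via process substitution, which is why the command line above reads --spdk_json_conf /dev/fd/62 /dev/fd/61 rather than real paths. A simplified sketch of that plumbing (section contents abbreviated; the NvmeNn1 filenames follow SPDK's usual controller/namespace naming, and gen_json is the sketch shown earlier):

# Simplified gen_fio_conf: global options from the NULL_DIF=2 test
# parameters (bs=4k, numjobs=8, iodepth=16), then one section per file.
gen_fio_conf() {
    local file files=${1:-2}
    cat <<EOF
[global]
thread=1
bs=4k
numjobs=8
iodepth=16
rw=randread
EOF
    for (( file = 0; file <= files; file++ )); do
        printf '[filename%d]\nfilename=Nvme%dn1\n' "$file" "$file"
    done
}

# Both inputs reach fio as /dev/fd/NN paths via process substitution;
# fio_bdev is the harness wrapper seen at target/dif.sh@82.
fio_bdev --ioengine=spdk_bdev --spdk_json_conf <(gen_json 0 1 2) <(gen_fio_conf 2)

With files=2 the loop emits three sections and numjobs=8 clones each of them, so fio launches 3 x 8 = 24 jobs, matching the "Starting 24 threads" banner below, each thread driving iodepth=16 of 4 KiB random reads against its DIF-type-2 null bdev.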
00:21:58.596 fio-3.35 00:21:58.596 Starting 24 threads 00:21:58.596 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.800 00:22:10.800 filename0: (groupid=0, jobs=1): err= 0: pid=342498: Fri Apr 19 03:35:46 2024 00:22:10.800 read: IOPS=438, BW=1753KiB/s (1795kB/s)(17.1MiB/10002msec) 00:22:10.800 slat (usec): min=7, max=111, avg=30.38, stdev=13.60 00:22:10.800 clat (usec): min=31286, max=46695, avg=36218.78, stdev=3969.81 00:22:10.800 lat (usec): min=31316, max=46717, avg=36249.16, stdev=3966.81 00:22:10.800 clat percentiles (usec): 00:22:10.800 | 1.00th=[32637], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:22:10.800 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.800 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.800 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:22:10.800 | 99.99th=[46924] 00:22:10.800 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1744.21, stdev=186.39, samples=19 00:22:10.800 iops : min= 352, max= 480, avg=436.05, stdev=46.60, samples=19 00:22:10.800 lat (msec) : 50=100.00% 00:22:10.800 cpu : usr=97.42%, sys=1.89%, ctx=113, majf=0, minf=48 00:22:10.800 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:22:10.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.800 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.800 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.800 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.800 filename0: (groupid=0, jobs=1): err= 0: pid=342499: Fri Apr 19 03:35:46 2024 00:22:10.800 read: IOPS=438, BW=1753KiB/s (1795kB/s)(17.1MiB/10003msec) 00:22:10.800 slat (usec): min=10, max=335, avg=37.01, stdev=12.51 00:22:10.800 clat (usec): min=16962, max=59546, avg=36166.95, stdev=4349.71 00:22:10.800 lat (usec): min=16998, max=59566, avg=36203.96, stdev=4348.65 00:22:10.800 clat percentiles (usec): 00:22:10.800 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:22:10.800 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.800 | 70.00th=[34866], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:22:10.800 | 99.00th=[45351], 99.50th=[46400], 99.90th=[59507], 99.95th=[59507], 00:22:10.800 | 99.99th=[59507] 00:22:10.800 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1744.84, stdev=182.07, samples=19 00:22:10.800 iops : min= 352, max= 480, avg=436.21, stdev=45.52, samples=19 00:22:10.800 lat (msec) : 20=0.36%, 50=99.22%, 100=0.41% 00:22:10.800 cpu : usr=93.58%, sys=3.35%, ctx=202, majf=0, minf=52 00:22:10.800 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:10.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.800 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.800 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.800 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.800 filename0: (groupid=0, jobs=1): err= 0: pid=342500: Fri Apr 19 03:35:46 2024 00:22:10.800 read: IOPS=440, BW=1762KiB/s (1804kB/s)(17.2MiB/10025msec) 00:22:10.800 slat (usec): min=8, max=121, avg=31.23, stdev=19.46 00:22:10.800 clat (usec): min=14946, max=46568, avg=36057.42, stdev=4250.55 00:22:10.800 lat (usec): min=14956, max=46587, avg=36088.65, stdev=4243.51 00:22:10.800 clat percentiles (usec): 00:22:10.800 | 1.00th=[28443], 5.00th=[33162], 10.00th=[33424], 
20.00th=[33817], 00:22:10.800 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.800 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.800 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:22:10.800 | 99.99th=[46400] 00:22:10.800 bw ( KiB/s): min= 1408, max= 1923, per=4.16%, avg=1751.74, stdev=204.99, samples=19 00:22:10.800 iops : min= 352, max= 480, avg=437.89, stdev=51.21, samples=19 00:22:10.800 lat (msec) : 20=0.36%, 50=99.64% 00:22:10.800 cpu : usr=93.88%, sys=3.58%, ctx=195, majf=0, minf=44 00:22:10.800 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:10.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.800 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.800 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.800 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.800 filename0: (groupid=0, jobs=1): err= 0: pid=342501: Fri Apr 19 03:35:46 2024 00:22:10.800 read: IOPS=438, BW=1752KiB/s (1794kB/s)(17.1MiB/10008msec) 00:22:10.800 slat (nsec): min=7295, max=69921, avg=31844.24, stdev=7705.56 00:22:10.800 clat (usec): min=16951, max=63945, avg=36248.10, stdev=4416.03 00:22:10.800 lat (usec): min=16981, max=63964, avg=36279.94, stdev=4415.61 00:22:10.800 clat percentiles (usec): 00:22:10.800 | 1.00th=[31589], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:22:10.800 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:10.800 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.800 | 99.00th=[45351], 99.50th=[46400], 99.90th=[63701], 99.95th=[63701], 00:22:10.800 | 99.99th=[63701] 00:22:10.800 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1747.35, stdev=186.93, samples=20 00:22:10.800 iops : min= 352, max= 480, avg=436.80, stdev=46.75, samples=20 00:22:10.800 lat (msec) : 20=0.36%, 50=99.27%, 100=0.36% 00:22:10.800 cpu : usr=98.04%, sys=1.51%, ctx=18, majf=0, minf=45 00:22:10.800 IO depths : 1=5.7%, 2=11.7%, 4=24.1%, 8=51.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:22:10.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.800 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.800 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.800 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.800 filename0: (groupid=0, jobs=1): err= 0: pid=342502: Fri Apr 19 03:35:46 2024 00:22:10.800 read: IOPS=438, BW=1753KiB/s (1795kB/s)(17.1MiB/10005msec) 00:22:10.800 slat (usec): min=8, max=162, avg=35.62, stdev=18.36 00:22:10.800 clat (usec): min=23334, max=49845, avg=36213.03, stdev=4073.82 00:22:10.800 lat (usec): min=23343, max=49859, avg=36248.65, stdev=4068.91 00:22:10.800 clat percentiles (usec): 00:22:10.800 | 1.00th=[32375], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:22:10.800 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.800 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.800 | 99.00th=[44303], 99.50th=[45876], 99.90th=[49546], 99.95th=[50070], 00:22:10.800 | 99.99th=[50070] 00:22:10.800 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1744.21, stdev=186.39, samples=19 00:22:10.800 iops : min= 352, max= 480, avg=436.05, stdev=46.60, samples=19 00:22:10.800 lat (msec) : 50=100.00% 00:22:10.800 cpu : usr=97.27%, sys=2.18%, ctx=74, majf=0, minf=44 00:22:10.801 IO depths : 
1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:22:10.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.801 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.801 filename0: (groupid=0, jobs=1): err= 0: pid=342503: Fri Apr 19 03:35:46 2024 00:22:10.801 read: IOPS=440, BW=1764KiB/s (1806kB/s)(17.3MiB/10028msec) 00:22:10.801 slat (usec): min=5, max=142, avg=27.86, stdev=22.21 00:22:10.801 clat (usec): min=14571, max=46549, avg=36038.25, stdev=4321.53 00:22:10.801 lat (usec): min=14615, max=46566, avg=36066.11, stdev=4315.59 00:22:10.801 clat percentiles (usec): 00:22:10.801 | 1.00th=[23725], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:22:10.801 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:10.801 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.801 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:22:10.801 | 99.99th=[46400] 00:22:10.801 bw ( KiB/s): min= 1408, max= 1968, per=4.19%, avg=1762.40, stdev=205.18, samples=20 00:22:10.801 iops : min= 352, max= 492, avg=440.60, stdev=51.30, samples=20 00:22:10.801 lat (msec) : 20=0.54%, 50=99.46% 00:22:10.801 cpu : usr=97.04%, sys=2.04%, ctx=53, majf=0, minf=39 00:22:10.801 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:22:10.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 issued rwts: total=4422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.801 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.801 filename0: (groupid=0, jobs=1): err= 0: pid=342504: Fri Apr 19 03:35:46 2024 00:22:10.801 read: IOPS=438, BW=1754KiB/s (1796kB/s)(17.1MiB/10010msec) 00:22:10.801 slat (usec): min=12, max=136, avg=41.47, stdev=15.42 00:22:10.801 clat (usec): min=16448, max=65973, avg=36109.73, stdev=4652.57 00:22:10.801 lat (usec): min=16495, max=66001, avg=36151.20, stdev=4649.61 00:22:10.801 clat percentiles (usec): 00:22:10.801 | 1.00th=[31327], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:22:10.801 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.801 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.801 | 99.00th=[45876], 99.50th=[53216], 99.90th=[65799], 99.95th=[65799], 00:22:10.801 | 99.99th=[65799] 00:22:10.801 bw ( KiB/s): min= 1408, max= 1920, per=4.16%, avg=1749.20, stdev=185.80, samples=20 00:22:10.801 iops : min= 352, max= 480, avg=437.30, stdev=46.45, samples=20 00:22:10.801 lat (msec) : 20=0.68%, 50=98.77%, 100=0.55% 00:22:10.801 cpu : usr=97.94%, sys=1.59%, ctx=63, majf=0, minf=41 00:22:10.801 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:22:10.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 issued rwts: total=4390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.801 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.801 filename0: (groupid=0, jobs=1): err= 0: pid=342505: Fri Apr 19 03:35:46 2024 00:22:10.801 read: IOPS=438, BW=1753KiB/s (1795kB/s)(17.1MiB/10002msec) 00:22:10.801 slat (usec): min=8, max=148, avg=33.84, stdev=16.64 
00:22:10.801 clat (usec): min=29887, max=46653, avg=36201.93, stdev=3967.30 00:22:10.801 lat (usec): min=29913, max=46679, avg=36235.77, stdev=3964.06 00:22:10.801 clat percentiles (usec): 00:22:10.801 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:22:10.801 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.801 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.801 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:22:10.801 | 99.99th=[46400] 00:22:10.801 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1744.21, stdev=186.39, samples=19 00:22:10.801 iops : min= 352, max= 480, avg=436.05, stdev=46.60, samples=19 00:22:10.801 lat (msec) : 50=100.00% 00:22:10.801 cpu : usr=97.57%, sys=1.83%, ctx=93, majf=0, minf=51 00:22:10.801 IO depths : 1=6.0%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:22:10.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.801 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.801 filename1: (groupid=0, jobs=1): err= 0: pid=342506: Fri Apr 19 03:35:46 2024 00:22:10.801 read: IOPS=438, BW=1752KiB/s (1794kB/s)(17.1MiB/10009msec) 00:22:10.801 slat (usec): min=14, max=141, avg=43.92, stdev=19.56 00:22:10.801 clat (usec): min=16844, max=65106, avg=36100.67, stdev=4482.86 00:22:10.801 lat (usec): min=16887, max=65138, avg=36144.59, stdev=4479.30 00:22:10.801 clat percentiles (usec): 00:22:10.801 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:22:10.801 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.801 | 70.00th=[34866], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:22:10.801 | 99.00th=[45351], 99.50th=[46400], 99.90th=[65274], 99.95th=[65274], 00:22:10.801 | 99.99th=[65274] 00:22:10.801 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1747.20, stdev=186.99, samples=20 00:22:10.801 iops : min= 352, max= 480, avg=436.80, stdev=46.75, samples=20 00:22:10.801 lat (msec) : 20=0.36%, 50=99.27%, 100=0.36% 00:22:10.801 cpu : usr=96.63%, sys=2.02%, ctx=51, majf=0, minf=40 00:22:10.801 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:10.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.801 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.801 filename1: (groupid=0, jobs=1): err= 0: pid=342507: Fri Apr 19 03:35:46 2024 00:22:10.801 read: IOPS=438, BW=1753KiB/s (1795kB/s)(17.1MiB/10003msec) 00:22:10.801 slat (usec): min=11, max=141, avg=43.30, stdev=20.08 00:22:10.801 clat (usec): min=16976, max=58994, avg=36072.38, stdev=4342.66 00:22:10.801 lat (usec): min=17011, max=59033, avg=36115.69, stdev=4339.98 00:22:10.801 clat percentiles (usec): 00:22:10.801 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:22:10.801 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.801 | 70.00th=[34341], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:22:10.801 | 99.00th=[45351], 99.50th=[46400], 99.90th=[58983], 99.95th=[58983], 00:22:10.801 | 99.99th=[58983] 00:22:10.801 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, 
avg=1738.26, stdev=192.25, samples=19 00:22:10.801 iops : min= 352, max= 480, avg=434.53, stdev=48.08, samples=19 00:22:10.801 lat (msec) : 20=0.36%, 50=99.27%, 100=0.36% 00:22:10.801 cpu : usr=98.04%, sys=1.52%, ctx=15, majf=0, minf=39 00:22:10.801 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:10.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.801 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.801 filename1: (groupid=0, jobs=1): err= 0: pid=342508: Fri Apr 19 03:35:46 2024 00:22:10.801 read: IOPS=438, BW=1753KiB/s (1795kB/s)(17.1MiB/10004msec) 00:22:10.801 slat (nsec): min=8558, max=98447, avg=32095.97, stdev=13554.30 00:22:10.801 clat (usec): min=27285, max=49053, avg=36250.08, stdev=4032.24 00:22:10.801 lat (usec): min=27337, max=49086, avg=36282.18, stdev=4028.77 00:22:10.801 clat percentiles (usec): 00:22:10.801 | 1.00th=[32637], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:22:10.801 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:10.801 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.801 | 99.00th=[46400], 99.50th=[46400], 99.90th=[49021], 99.95th=[49021], 00:22:10.801 | 99.99th=[49021] 00:22:10.801 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1744.21, stdev=186.39, samples=19 00:22:10.801 iops : min= 352, max= 480, avg=436.05, stdev=46.60, samples=19 00:22:10.801 lat (msec) : 50=100.00% 00:22:10.801 cpu : usr=96.69%, sys=2.35%, ctx=220, majf=0, minf=38 00:22:10.801 IO depths : 1=1.9%, 2=7.5%, 4=22.5%, 8=57.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:22:10.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.801 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.801 filename1: (groupid=0, jobs=1): err= 0: pid=342509: Fri Apr 19 03:35:46 2024 00:22:10.801 read: IOPS=448, BW=1794KiB/s (1837kB/s)(17.5MiB/10006msec) 00:22:10.801 slat (usec): min=6, max=234, avg=27.84, stdev=18.10 00:22:10.801 clat (usec): min=5405, max=49666, avg=35418.71, stdev=5474.32 00:22:10.801 lat (usec): min=5414, max=49675, avg=35446.55, stdev=5480.51 00:22:10.801 clat percentiles (usec): 00:22:10.801 | 1.00th=[12911], 5.00th=[31327], 10.00th=[33162], 20.00th=[33817], 00:22:10.801 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.801 | 70.00th=[34866], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:22:10.801 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46400], 99.95th=[48497], 00:22:10.801 | 99.99th=[49546] 00:22:10.801 bw ( KiB/s): min= 1408, max= 2368, per=4.23%, avg=1781.89, stdev=249.03, samples=19 00:22:10.801 iops : min= 352, max= 592, avg=445.47, stdev=62.26, samples=19 00:22:10.801 lat (msec) : 10=0.49%, 20=2.01%, 50=97.50% 00:22:10.801 cpu : usr=93.95%, sys=3.26%, ctx=261, majf=0, minf=138 00:22:10.801 IO depths : 1=5.7%, 2=11.7%, 4=24.0%, 8=51.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:22:10.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.801 issued rwts: total=4488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.801 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:22:10.801 filename1: (groupid=0, jobs=1): err= 0: pid=342510: Fri Apr 19 03:35:46 2024 00:22:10.801 read: IOPS=440, BW=1762KiB/s (1804kB/s)(17.2MiB/10027msec) 00:22:10.801 slat (usec): min=6, max=176, avg=39.42, stdev=17.35 00:22:10.801 clat (usec): min=16180, max=46362, avg=35967.49, stdev=4183.28 00:22:10.801 lat (usec): min=16186, max=46399, avg=36006.91, stdev=4184.74 00:22:10.801 clat percentiles (usec): 00:22:10.801 | 1.00th=[25035], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:22:10.801 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.801 | 70.00th=[34866], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:22:10.802 | 99.00th=[44303], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:22:10.802 | 99.99th=[46400] 00:22:10.802 bw ( KiB/s): min= 1408, max= 1920, per=4.16%, avg=1751.37, stdev=204.68, samples=19 00:22:10.802 iops : min= 352, max= 480, avg=437.84, stdev=51.17, samples=19 00:22:10.802 lat (msec) : 20=0.36%, 50=99.64% 00:22:10.802 cpu : usr=95.30%, sys=2.58%, ctx=231, majf=0, minf=45 00:22:10.802 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:22:10.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.802 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.802 filename1: (groupid=0, jobs=1): err= 0: pid=342511: Fri Apr 19 03:35:46 2024 00:22:10.802 read: IOPS=438, BW=1754KiB/s (1796kB/s)(17.1MiB/10003msec) 00:22:10.802 slat (usec): min=8, max=115, avg=32.65, stdev=10.72 00:22:10.802 clat (usec): min=17020, max=58959, avg=36192.17, stdev=4425.81 00:22:10.802 lat (usec): min=17056, max=58999, avg=36224.81, stdev=4425.48 00:22:10.802 clat percentiles (usec): 00:22:10.802 | 1.00th=[31065], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:22:10.802 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:10.802 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.802 | 99.00th=[45351], 99.50th=[46400], 99.90th=[58983], 99.95th=[58983], 00:22:10.802 | 99.99th=[58983] 00:22:10.802 bw ( KiB/s): min= 1408, max= 1936, per=4.15%, avg=1745.84, stdev=182.89, samples=19 00:22:10.802 iops : min= 352, max= 484, avg=436.42, stdev=45.74, samples=19 00:22:10.802 lat (msec) : 20=0.36%, 50=99.27%, 100=0.36% 00:22:10.802 cpu : usr=98.16%, sys=1.44%, ctx=17, majf=0, minf=47 00:22:10.802 IO depths : 1=5.7%, 2=11.7%, 4=24.1%, 8=51.5%, 16=6.9%, 32=0.0%, >=64=0.0% 00:22:10.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 issued rwts: total=4386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.802 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.802 filename1: (groupid=0, jobs=1): err= 0: pid=342512: Fri Apr 19 03:35:46 2024 00:22:10.802 read: IOPS=438, BW=1754KiB/s (1796kB/s)(17.1MiB/10013msec) 00:22:10.802 slat (usec): min=9, max=152, avg=40.78, stdev=18.55 00:22:10.802 clat (usec): min=14845, max=68945, avg=36137.96, stdev=4607.79 00:22:10.802 lat (usec): min=14869, max=68977, avg=36178.73, stdev=4602.43 00:22:10.802 clat percentiles (usec): 00:22:10.802 | 1.00th=[28443], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:22:10.802 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 
60.00th=[34341], 00:22:10.802 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.802 | 99.00th=[45351], 99.50th=[46400], 99.90th=[65274], 99.95th=[65274], 00:22:10.802 | 99.99th=[68682] 00:22:10.802 bw ( KiB/s): min= 1408, max= 1956, per=4.16%, avg=1751.40, stdev=188.08, samples=20 00:22:10.802 iops : min= 352, max= 489, avg=437.85, stdev=47.02, samples=20 00:22:10.802 lat (msec) : 20=0.36%, 50=99.27%, 100=0.36% 00:22:10.802 cpu : usr=98.47%, sys=1.13%, ctx=16, majf=0, minf=47 00:22:10.802 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:22:10.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 issued rwts: total=4390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.802 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.802 filename1: (groupid=0, jobs=1): err= 0: pid=342513: Fri Apr 19 03:35:46 2024 00:22:10.802 read: IOPS=439, BW=1759KiB/s (1801kB/s)(17.2MiB/10022msec) 00:22:10.802 slat (usec): min=7, max=147, avg=32.48, stdev=18.01 00:22:10.802 clat (usec): min=18105, max=47514, avg=36130.45, stdev=4121.67 00:22:10.802 lat (usec): min=18153, max=47530, avg=36162.93, stdev=4118.60 00:22:10.802 clat percentiles (usec): 00:22:10.802 | 1.00th=[31327], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:22:10.802 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:10.802 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.802 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:22:10.802 | 99.99th=[47449] 00:22:10.802 bw ( KiB/s): min= 1408, max= 1920, per=4.16%, avg=1753.26, stdev=178.27, samples=19 00:22:10.802 iops : min= 352, max= 480, avg=438.32, stdev=44.57, samples=19 00:22:10.802 lat (msec) : 20=0.14%, 50=99.86% 00:22:10.802 cpu : usr=96.30%, sys=2.24%, ctx=98, majf=0, minf=41 00:22:10.802 IO depths : 1=2.2%, 2=7.6%, 4=21.7%, 8=58.1%, 16=10.3%, 32=0.0%, >=64=0.0% 00:22:10.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 issued rwts: total=4406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.802 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.802 filename2: (groupid=0, jobs=1): err= 0: pid=342514: Fri Apr 19 03:35:46 2024 00:22:10.802 read: IOPS=438, BW=1753KiB/s (1795kB/s)(17.1MiB/10004msec) 00:22:10.802 slat (usec): min=8, max=132, avg=31.88, stdev=18.80 00:22:10.802 clat (usec): min=22029, max=47520, avg=36244.10, stdev=4027.81 00:22:10.802 lat (usec): min=22080, max=47576, avg=36275.98, stdev=4020.25 00:22:10.802 clat percentiles (usec): 00:22:10.802 | 1.00th=[31851], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:22:10.802 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:10.802 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.802 | 99.00th=[44827], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:22:10.802 | 99.99th=[47449] 00:22:10.802 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1744.21, stdev=186.39, samples=19 00:22:10.802 iops : min= 352, max= 480, avg=436.05, stdev=46.60, samples=19 00:22:10.802 lat (msec) : 50=100.00% 00:22:10.802 cpu : usr=97.20%, sys=1.89%, ctx=95, majf=0, minf=57 00:22:10.802 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:22:10.802 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.802 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.802 filename2: (groupid=0, jobs=1): err= 0: pid=342515: Fri Apr 19 03:35:46 2024 00:22:10.802 read: IOPS=437, BW=1752KiB/s (1794kB/s)(17.1MiB/10010msec) 00:22:10.802 slat (usec): min=9, max=136, avg=40.68, stdev=15.95 00:22:10.802 clat (usec): min=16910, max=66766, avg=36157.68, stdev=4510.15 00:22:10.802 lat (usec): min=16949, max=66788, avg=36198.36, stdev=4506.65 00:22:10.802 clat percentiles (usec): 00:22:10.802 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:22:10.802 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.802 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.802 | 99.00th=[45351], 99.50th=[46400], 99.90th=[66847], 99.95th=[66847], 00:22:10.802 | 99.99th=[66847] 00:22:10.802 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1746.80, stdev=186.61, samples=20 00:22:10.802 iops : min= 352, max= 480, avg=436.70, stdev=46.65, samples=20 00:22:10.802 lat (msec) : 20=0.36%, 50=99.27%, 100=0.36% 00:22:10.802 cpu : usr=97.97%, sys=1.61%, ctx=18, majf=0, minf=37 00:22:10.802 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:10.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.802 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.802 filename2: (groupid=0, jobs=1): err= 0: pid=342516: Fri Apr 19 03:35:46 2024 00:22:10.802 read: IOPS=438, BW=1753KiB/s (1795kB/s)(17.1MiB/10003msec) 00:22:10.802 slat (usec): min=8, max=109, avg=36.10, stdev=11.20 00:22:10.802 clat (usec): min=17053, max=59165, avg=36179.67, stdev=4330.08 00:22:10.802 lat (usec): min=17067, max=59198, avg=36215.77, stdev=4328.15 00:22:10.802 clat percentiles (usec): 00:22:10.802 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:22:10.802 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.802 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.802 | 99.00th=[45351], 99.50th=[46400], 99.90th=[58983], 99.95th=[58983], 00:22:10.802 | 99.99th=[58983] 00:22:10.802 bw ( KiB/s): min= 1408, max= 1936, per=4.15%, avg=1745.00, stdev=182.08, samples=19 00:22:10.802 iops : min= 352, max= 484, avg=436.21, stdev=45.54, samples=19 00:22:10.802 lat (msec) : 20=0.36%, 50=99.27%, 100=0.36% 00:22:10.802 cpu : usr=92.56%, sys=4.06%, ctx=185, majf=0, minf=41 00:22:10.802 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:22:10.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.802 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.802 filename2: (groupid=0, jobs=1): err= 0: pid=342517: Fri Apr 19 03:35:46 2024 00:22:10.802 read: IOPS=440, BW=1762KiB/s (1805kB/s)(17.2MiB/10023msec) 00:22:10.802 slat (usec): min=8, max=115, avg=34.30, stdev=16.00 00:22:10.802 clat (usec): min=14530, max=46684, avg=36021.79, stdev=4282.05 
00:22:10.802 lat (usec): min=14539, max=46709, avg=36056.09, stdev=4277.92 00:22:10.802 clat percentiles (usec): 00:22:10.802 | 1.00th=[24511], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:22:10.802 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.802 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.802 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:22:10.802 | 99.99th=[46924] 00:22:10.802 bw ( KiB/s): min= 1408, max= 1923, per=4.16%, avg=1751.74, stdev=204.99, samples=19 00:22:10.802 iops : min= 352, max= 480, avg=437.89, stdev=51.21, samples=19 00:22:10.802 lat (msec) : 20=0.36%, 50=99.64% 00:22:10.802 cpu : usr=98.02%, sys=1.55%, ctx=33, majf=0, minf=53 00:22:10.802 IO depths : 1=5.5%, 2=11.7%, 4=24.8%, 8=51.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:22:10.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.802 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.803 filename2: (groupid=0, jobs=1): err= 0: pid=342518: Fri Apr 19 03:35:46 2024 00:22:10.803 read: IOPS=441, BW=1766KiB/s (1808kB/s)(17.2MiB/10005msec) 00:22:10.803 slat (usec): min=4, max=141, avg=31.15, stdev=22.38 00:22:10.803 clat (usec): min=11248, max=46545, avg=35980.37, stdev=4487.07 00:22:10.803 lat (usec): min=11260, max=46562, avg=36011.53, stdev=4480.92 00:22:10.803 clat percentiles (usec): 00:22:10.803 | 1.00th=[24249], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:22:10.803 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:10.803 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.803 | 99.00th=[44827], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:22:10.803 | 99.99th=[46400] 00:22:10.803 bw ( KiB/s): min= 1408, max= 2048, per=4.18%, avg=1758.47, stdev=199.54, samples=19 00:22:10.803 iops : min= 352, max= 512, avg=439.58, stdev=49.85, samples=19 00:22:10.803 lat (msec) : 20=0.72%, 50=99.28% 00:22:10.803 cpu : usr=97.49%, sys=1.70%, ctx=40, majf=0, minf=43 00:22:10.803 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:22:10.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.803 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.803 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.803 filename2: (groupid=0, jobs=1): err= 0: pid=342519: Fri Apr 19 03:35:46 2024 00:22:10.803 read: IOPS=438, BW=1753KiB/s (1795kB/s)(17.1MiB/10004msec) 00:22:10.803 slat (usec): min=7, max=136, avg=42.48, stdev=19.38 00:22:10.803 clat (usec): min=31308, max=46606, avg=36120.95, stdev=4009.13 00:22:10.803 lat (usec): min=31371, max=46640, avg=36163.43, stdev=4003.26 00:22:10.803 clat percentiles (usec): 00:22:10.803 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:22:10.803 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.803 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.803 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:22:10.803 | 99.99th=[46400] 00:22:10.803 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1744.21, stdev=186.39, samples=19 00:22:10.803 iops : min= 352, max= 480, avg=436.05, 
stdev=46.60, samples=19 00:22:10.803 lat (msec) : 50=100.00% 00:22:10.803 cpu : usr=97.46%, sys=2.05%, ctx=22, majf=0, minf=42 00:22:10.803 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:10.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.803 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.803 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.803 filename2: (groupid=0, jobs=1): err= 0: pid=342520: Fri Apr 19 03:35:46 2024 00:22:10.803 read: IOPS=438, BW=1753KiB/s (1795kB/s)(17.1MiB/10004msec) 00:22:10.803 slat (usec): min=10, max=140, avg=42.21, stdev=21.07 00:22:10.803 clat (usec): min=31259, max=46572, avg=36122.57, stdev=4018.23 00:22:10.803 lat (usec): min=31292, max=46594, avg=36164.78, stdev=4011.51 00:22:10.803 clat percentiles (usec): 00:22:10.803 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:22:10.803 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:10.803 | 70.00th=[34866], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:22:10.803 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:22:10.803 | 99.99th=[46400] 00:22:10.803 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1744.21, stdev=186.39, samples=19 00:22:10.803 iops : min= 352, max= 480, avg=436.05, stdev=46.60, samples=19 00:22:10.803 lat (msec) : 50=100.00% 00:22:10.803 cpu : usr=97.21%, sys=2.29%, ctx=36, majf=0, minf=37 00:22:10.803 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:10.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.803 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.803 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.803 filename2: (groupid=0, jobs=1): err= 0: pid=342521: Fri Apr 19 03:35:46 2024 00:22:10.803 read: IOPS=438, BW=1752KiB/s (1794kB/s)(17.1MiB/10008msec) 00:22:10.803 slat (usec): min=8, max=111, avg=32.58, stdev=12.95 00:22:10.803 clat (usec): min=16995, max=63645, avg=36223.43, stdev=4411.32 00:22:10.803 lat (usec): min=17030, max=63661, avg=36256.01, stdev=4412.75 00:22:10.803 clat percentiles (usec): 00:22:10.803 | 1.00th=[31327], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:22:10.803 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:10.803 | 70.00th=[34866], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:22:10.803 | 99.00th=[45351], 99.50th=[46400], 99.90th=[63701], 99.95th=[63701], 00:22:10.803 | 99.99th=[63701] 00:22:10.803 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1747.35, stdev=186.42, samples=20 00:22:10.803 iops : min= 352, max= 480, avg=436.80, stdev=46.62, samples=20 00:22:10.803 lat (msec) : 20=0.36%, 50=99.27%, 100=0.36% 00:22:10.803 cpu : usr=98.16%, sys=1.39%, ctx=23, majf=0, minf=56 00:22:10.803 IO depths : 1=5.5%, 2=11.7%, 4=24.8%, 8=51.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:22:10.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.803 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.803 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:10.803 00:22:10.803 Run status group 0 (all jobs): 00:22:10.803 READ: bw=41.1MiB/s 
(43.1MB/s), 1752KiB/s-1794KiB/s (1794kB/s-1837kB/s), io=412MiB (432MB), run=10002-10028msec 00:22:10.803 03:35:46 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:10.803 03:35:46 -- target/dif.sh@43 -- # local sub 00:22:10.803 03:35:46 -- target/dif.sh@45 -- # for sub in "$@" 00:22:10.803 03:35:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:10.803 03:35:46 -- target/dif.sh@36 -- # local sub_id=0 00:22:10.803 03:35:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:10.803 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.803 03:35:46 -- common/autotest_common.sh@10 -- # set +x 00:22:10.803 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.803 03:35:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:10.803 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.803 03:35:46 -- common/autotest_common.sh@10 -- # set +x 00:22:10.803 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.803 03:35:46 -- target/dif.sh@45 -- # for sub in "$@" 00:22:10.803 03:35:46 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:10.803 03:35:46 -- target/dif.sh@36 -- # local sub_id=1 00:22:10.803 03:35:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:10.803 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.803 03:35:46 -- common/autotest_common.sh@10 -- # set +x 00:22:10.803 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.803 03:35:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:10.803 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.803 03:35:46 -- common/autotest_common.sh@10 -- # set +x 00:22:10.803 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.803 03:35:46 -- target/dif.sh@45 -- # for sub in "$@" 00:22:10.803 03:35:46 -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:10.803 03:35:46 -- target/dif.sh@36 -- # local sub_id=2 00:22:10.803 03:35:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:10.803 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.803 03:35:46 -- common/autotest_common.sh@10 -- # set +x 00:22:10.803 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.803 03:35:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:10.803 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.803 03:35:46 -- common/autotest_common.sh@10 -- # set +x 00:22:10.803 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.803 03:35:46 -- target/dif.sh@115 -- # NULL_DIF=1 00:22:10.803 03:35:46 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:10.803 03:35:46 -- target/dif.sh@115 -- # numjobs=2 00:22:10.803 03:35:46 -- target/dif.sh@115 -- # iodepth=8 00:22:10.803 03:35:46 -- target/dif.sh@115 -- # runtime=5 00:22:10.803 03:35:46 -- target/dif.sh@115 -- # files=1 00:22:10.803 03:35:46 -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:10.803 03:35:46 -- target/dif.sh@28 -- # local sub 00:22:10.803 03:35:46 -- target/dif.sh@30 -- # for sub in "$@" 00:22:10.803 03:35:46 -- target/dif.sh@31 -- # create_subsystem 0 00:22:10.803 03:35:46 -- target/dif.sh@18 -- # local sub_id=0 00:22:10.803 03:35:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:10.803 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.803 03:35:46 -- 
common/autotest_common.sh@10 -- # set +x 00:22:10.803 bdev_null0 00:22:10.803 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.803 03:35:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:10.803 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.803 03:35:46 -- common/autotest_common.sh@10 -- # set +x 00:22:10.803 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.803 03:35:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:10.803 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.803 03:35:46 -- common/autotest_common.sh@10 -- # set +x 00:22:10.803 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.803 03:35:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:10.803 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.803 03:35:46 -- common/autotest_common.sh@10 -- # set +x 00:22:10.803 [2024-04-19 03:35:46.964815] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.803 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.803 03:35:46 -- target/dif.sh@30 -- # for sub in "$@" 00:22:10.803 03:35:46 -- target/dif.sh@31 -- # create_subsystem 1 00:22:10.803 03:35:46 -- target/dif.sh@18 -- # local sub_id=1 00:22:10.803 03:35:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:10.803 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.803 03:35:46 -- common/autotest_common.sh@10 -- # set +x 00:22:10.803 bdev_null1 00:22:10.803 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.803 03:35:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:10.803 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.804 03:35:46 -- common/autotest_common.sh@10 -- # set +x 00:22:10.804 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.804 03:35:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:10.804 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.804 03:35:46 -- common/autotest_common.sh@10 -- # set +x 00:22:10.804 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.804 03:35:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.804 03:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.804 03:35:46 -- common/autotest_common.sh@10 -- # set +x 00:22:10.804 03:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.804 03:35:47 -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:10.804 03:35:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:10.804 03:35:47 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:10.804 03:35:47 -- target/dif.sh@82 -- # gen_fio_conf 00:22:10.804 03:35:47 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:10.804 03:35:47 -- target/dif.sh@54 -- # local file 00:22:10.804 03:35:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 
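For reference, the per-subsystem setup traced above reduces to four RPCs. A minimal standalone sketch, assuming a running nvmf_tgt and the stock scripts/rpc.py from the SPDK tree (the rpc.py path is an assumption; the RPC names and arguments are taken verbatim from this run):

    sub_id=0
    rpc=./scripts/rpc.py

    # Null bdev backing the namespace: 64 MiB, 512-byte blocks,
    # 16 bytes of metadata, protection information (DIF) type 1.
    $rpc bdev_null_create "bdev_null${sub_id}" 64 512 --md-size 16 --dif-type 1

    # Subsystem, namespace, and NVMe/TCP listener, matching the trace.
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}" \
        --serial-number "53313233-${sub_id}" --allow-any-host
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub_id}" "bdev_null${sub_id}"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub_id}" \
        -t tcp -a 10.0.0.2 -s 4420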
00:22:10.804 03:35:47 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:10.804 03:35:47 -- target/dif.sh@56 -- # cat 00:22:10.804 03:35:47 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:10.804 03:35:47 -- nvmf/common.sh@521 -- # config=() 00:22:10.804 03:35:47 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:10.804 03:35:47 -- nvmf/common.sh@521 -- # local subsystem config 00:22:10.804 03:35:47 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:10.804 03:35:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:10.804 03:35:47 -- common/autotest_common.sh@1327 -- # shift 00:22:10.804 03:35:47 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:10.804 03:35:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:10.804 { 00:22:10.804 "params": { 00:22:10.804 "name": "Nvme$subsystem", 00:22:10.804 "trtype": "$TEST_TRANSPORT", 00:22:10.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.804 "adrfam": "ipv4", 00:22:10.804 "trsvcid": "$NVMF_PORT", 00:22:10.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.804 "hdgst": ${hdgst:-false}, 00:22:10.804 "ddgst": ${ddgst:-false} 00:22:10.804 }, 00:22:10.804 "method": "bdev_nvme_attach_controller" 00:22:10.804 } 00:22:10.804 EOF 00:22:10.804 )") 00:22:10.804 03:35:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:10.804 03:35:47 -- nvmf/common.sh@543 -- # cat 00:22:10.804 03:35:47 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:10.804 03:35:47 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:10.804 03:35:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:10.804 03:35:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:10.804 03:35:47 -- target/dif.sh@72 -- # (( file <= files )) 00:22:10.804 03:35:47 -- target/dif.sh@73 -- # cat 00:22:10.804 03:35:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:10.804 03:35:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:10.804 { 00:22:10.804 "params": { 00:22:10.804 "name": "Nvme$subsystem", 00:22:10.804 "trtype": "$TEST_TRANSPORT", 00:22:10.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.804 "adrfam": "ipv4", 00:22:10.804 "trsvcid": "$NVMF_PORT", 00:22:10.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.804 "hdgst": ${hdgst:-false}, 00:22:10.804 "ddgst": ${ddgst:-false} 00:22:10.804 }, 00:22:10.804 "method": "bdev_nvme_attach_controller" 00:22:10.804 } 00:22:10.804 EOF 00:22:10.804 )") 00:22:10.804 03:35:47 -- target/dif.sh@72 -- # (( file++ )) 00:22:10.804 03:35:47 -- target/dif.sh@72 -- # (( file <= files )) 00:22:10.804 03:35:47 -- nvmf/common.sh@543 -- # cat 00:22:10.804 03:35:47 -- nvmf/common.sh@545 -- # jq . 
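The heredoc template above is filled in once per subsystem, the fragments are comma-joined, and jq normalizes the result before it reaches fio; the printf that follows shows the final values. Only the inner objects appear in this trace, so the outer wrapper below is an assumption based on SPDK's bdev JSON config schema, and target.json is a hypothetical file name:

    cat > target.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }]
      }]
    }
    EOF
    # This run attaches a second controller the same way, with Nvme1,
    # cnode1, and host1 substituted in.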
00:22:10.804 03:35:47 -- nvmf/common.sh@546 -- # IFS=, 00:22:10.804 03:35:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:10.804 "params": { 00:22:10.804 "name": "Nvme0", 00:22:10.804 "trtype": "tcp", 00:22:10.804 "traddr": "10.0.0.2", 00:22:10.804 "adrfam": "ipv4", 00:22:10.804 "trsvcid": "4420", 00:22:10.804 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:10.804 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:10.804 "hdgst": false, 00:22:10.804 "ddgst": false 00:22:10.804 }, 00:22:10.804 "method": "bdev_nvme_attach_controller" 00:22:10.804 },{ 00:22:10.804 "params": { 00:22:10.804 "name": "Nvme1", 00:22:10.804 "trtype": "tcp", 00:22:10.804 "traddr": "10.0.0.2", 00:22:10.804 "adrfam": "ipv4", 00:22:10.804 "trsvcid": "4420", 00:22:10.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.804 "hdgst": false, 00:22:10.804 "ddgst": false 00:22:10.804 }, 00:22:10.804 "method": "bdev_nvme_attach_controller" 00:22:10.804 }' 00:22:10.804 03:35:47 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:10.804 03:35:47 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:10.804 03:35:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:10.804 03:35:47 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:10.804 03:35:47 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:10.804 03:35:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:10.804 03:35:47 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:10.804 03:35:47 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:10.804 03:35:47 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:10.804 03:35:47 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:10.804 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:10.804 ... 00:22:10.804 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:10.804 ... 
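Outside the harness's fd plumbing (fio reads the JSON config on /dev/fd/62 and the job file on /dev/fd/61 above), the same invocation can be reproduced with ordinary files. A sketch under stated assumptions: gen_fio_conf's exact output is not in this log, so the job file below is an approximate reconstruction of what fio echoes (randread, bs=8k,16k,128k, iodepth=8, two jobs per file over two files, 5-second runtime); dif.fio is a hypothetical name, and thread=1 is required by the SPDK fio plugin:

    cat > dif.fio <<'EOF'
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf=<(cat target.json) dif.fio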
00:22:10.804 fio-3.35 00:22:10.804 Starting 4 threads 00:22:10.804 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.072 00:22:16.072 filename0: (groupid=0, jobs=1): err= 0: pid=343906: Fri Apr 19 03:35:53 2024 00:22:16.072 read: IOPS=1627, BW=12.7MiB/s (13.3MB/s)(63.6MiB/5001msec) 00:22:16.072 slat (nsec): min=5084, max=48613, avg=11022.20, stdev=4443.98 00:22:16.072 clat (usec): min=975, max=9793, avg=4881.61, stdev=1031.91 00:22:16.072 lat (usec): min=987, max=9805, avg=4892.63, stdev=1031.73 00:22:16.072 clat percentiles (usec): 00:22:16.072 | 1.00th=[ 2966], 5.00th=[ 3621], 10.00th=[ 3851], 20.00th=[ 4047], 00:22:16.072 | 30.00th=[ 4178], 40.00th=[ 4359], 50.00th=[ 4686], 60.00th=[ 5080], 00:22:16.072 | 70.00th=[ 5342], 80.00th=[ 5604], 90.00th=[ 6259], 95.00th=[ 6718], 00:22:16.072 | 99.00th=[ 8160], 99.50th=[ 8586], 99.90th=[ 9110], 99.95th=[ 9372], 00:22:16.072 | 99.99th=[ 9765] 00:22:16.072 bw ( KiB/s): min=11504, max=14573, per=24.18%, avg=12872.56, stdev=1236.88, samples=9 00:22:16.072 iops : min= 1438, max= 1821, avg=1609.00, stdev=154.50, samples=9 00:22:16.072 lat (usec) : 1000=0.01% 00:22:16.072 lat (msec) : 2=0.12%, 4=16.39%, 10=83.47% 00:22:16.072 cpu : usr=92.22%, sys=7.30%, ctx=14, majf=0, minf=9 00:22:16.072 IO depths : 1=0.1%, 2=6.0%, 4=65.0%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:16.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.072 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.072 issued rwts: total=8138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.072 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:16.072 filename0: (groupid=0, jobs=1): err= 0: pid=343907: Fri Apr 19 03:35:53 2024 00:22:16.072 read: IOPS=1653, BW=12.9MiB/s (13.5MB/s)(64.6MiB/5002msec) 00:22:16.072 slat (nsec): min=4934, max=48976, avg=11153.22, stdev=4303.25 00:22:16.072 clat (usec): min=925, max=9954, avg=4802.10, stdev=988.54 00:22:16.072 lat (usec): min=937, max=9967, avg=4813.25, stdev=988.40 00:22:16.072 clat percentiles (usec): 00:22:16.072 | 1.00th=[ 3064], 5.00th=[ 3589], 10.00th=[ 3785], 20.00th=[ 4015], 00:22:16.072 | 30.00th=[ 4178], 40.00th=[ 4359], 50.00th=[ 4555], 60.00th=[ 4948], 00:22:16.072 | 70.00th=[ 5276], 80.00th=[ 5473], 90.00th=[ 6128], 95.00th=[ 6587], 00:22:16.072 | 99.00th=[ 8029], 99.50th=[ 8586], 99.90th=[ 8979], 99.95th=[ 9110], 00:22:16.072 | 99.99th=[ 9896] 00:22:16.072 bw ( KiB/s): min=11584, max=15470, per=24.84%, avg=13223.80, stdev=1264.18, samples=10 00:22:16.072 iops : min= 1448, max= 1933, avg=1652.90, stdev=157.88, samples=10 00:22:16.072 lat (usec) : 1000=0.01% 00:22:16.072 lat (msec) : 2=0.01%, 4=18.80%, 10=81.18% 00:22:16.072 cpu : usr=91.76%, sys=7.74%, ctx=9, majf=0, minf=0 00:22:16.072 IO depths : 1=0.1%, 2=6.2%, 4=65.3%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:16.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.072 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.072 issued rwts: total=8271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.072 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:16.072 filename1: (groupid=0, jobs=1): err= 0: pid=343908: Fri Apr 19 03:35:53 2024 00:22:16.072 read: IOPS=1746, BW=13.6MiB/s (14.3MB/s)(68.3MiB/5003msec) 00:22:16.072 slat (nsec): min=6113, max=52104, avg=10924.64, stdev=4258.51 00:22:16.072 clat (usec): min=1504, max=9511, avg=4544.81, stdev=942.95 00:22:16.072 lat (usec): min=1517, max=9532, avg=4555.73, stdev=942.94 00:22:16.072 
clat percentiles (usec): 00:22:16.072 | 1.00th=[ 2737], 5.00th=[ 3195], 10.00th=[ 3458], 20.00th=[ 3818], 00:22:16.072 | 30.00th=[ 4047], 40.00th=[ 4228], 50.00th=[ 4424], 60.00th=[ 4686], 00:22:16.072 | 70.00th=[ 5014], 80.00th=[ 5276], 90.00th=[ 5604], 95.00th=[ 6128], 00:22:16.072 | 99.00th=[ 7504], 99.50th=[ 8291], 99.90th=[ 9241], 99.95th=[ 9241], 00:22:16.072 | 99.99th=[ 9503] 00:22:16.072 bw ( KiB/s): min=12336, max=16336, per=26.24%, avg=13971.20, stdev=1330.00, samples=10 00:22:16.072 iops : min= 1542, max= 2042, avg=1746.40, stdev=166.25, samples=10 00:22:16.072 lat (msec) : 2=0.24%, 4=27.68%, 10=72.08% 00:22:16.072 cpu : usr=91.72%, sys=7.78%, ctx=9, majf=0, minf=0 00:22:16.072 IO depths : 1=0.2%, 2=8.0%, 4=63.4%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:16.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.072 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.072 issued rwts: total=8737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.072 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:16.072 filename1: (groupid=0, jobs=1): err= 0: pid=343909: Fri Apr 19 03:35:53 2024 00:22:16.072 read: IOPS=1628, BW=12.7MiB/s (13.3MB/s)(63.7MiB/5002msec) 00:22:16.072 slat (nsec): min=5938, max=51810, avg=11297.32, stdev=4507.95 00:22:16.072 clat (usec): min=920, max=9693, avg=4878.18, stdev=1098.64 00:22:16.072 lat (usec): min=932, max=9701, avg=4889.47, stdev=1098.65 00:22:16.072 clat percentiles (usec): 00:22:16.072 | 1.00th=[ 2868], 5.00th=[ 3589], 10.00th=[ 3785], 20.00th=[ 4015], 00:22:16.072 | 30.00th=[ 4178], 40.00th=[ 4359], 50.00th=[ 4621], 60.00th=[ 5080], 00:22:16.072 | 70.00th=[ 5342], 80.00th=[ 5604], 90.00th=[ 6194], 95.00th=[ 6915], 00:22:16.072 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 9372], 99.95th=[ 9503], 00:22:16.072 | 99.99th=[ 9634] 00:22:16.072 bw ( KiB/s): min=10752, max=15280, per=24.47%, avg=13028.80, stdev=1634.94, samples=10 00:22:16.072 iops : min= 1344, max= 1910, avg=1628.60, stdev=204.37, samples=10 00:22:16.072 lat (usec) : 1000=0.04% 00:22:16.072 lat (msec) : 2=0.16%, 4=19.60%, 10=80.20% 00:22:16.072 cpu : usr=92.62%, sys=6.86%, ctx=8, majf=0, minf=0 00:22:16.072 IO depths : 1=0.1%, 2=4.3%, 4=64.6%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:16.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.072 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.072 issued rwts: total=8148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.072 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:16.072 00:22:16.072 Run status group 0 (all jobs): 00:22:16.072 READ: bw=52.0MiB/s (54.5MB/s), 12.7MiB/s-13.6MiB/s (13.3MB/s-14.3MB/s), io=260MiB (273MB), run=5001-5003msec 00:22:16.072 03:35:53 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:16.072 03:35:53 -- target/dif.sh@43 -- # local sub 00:22:16.072 03:35:53 -- target/dif.sh@45 -- # for sub in "$@" 00:22:16.072 03:35:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:16.073 03:35:53 -- target/dif.sh@36 -- # local sub_id=0 00:22:16.073 03:35:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:16.073 03:35:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.073 03:35:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.073 03:35:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.073 03:35:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:16.073 03:35:53 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.073 03:35:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.073 03:35:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.073 03:35:53 -- target/dif.sh@45 -- # for sub in "$@" 00:22:16.073 03:35:53 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:16.073 03:35:53 -- target/dif.sh@36 -- # local sub_id=1 00:22:16.073 03:35:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:16.073 03:35:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.073 03:35:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.073 03:35:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.073 03:35:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:16.073 03:35:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.073 03:35:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.073 03:35:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.073 00:22:16.073 real 0m24.378s 00:22:16.073 user 4m29.164s 00:22:16.073 sys 0m8.925s 00:22:16.073 03:35:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:16.073 03:35:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.073 ************************************ 00:22:16.073 END TEST fio_dif_rand_params 00:22:16.073 ************************************ 00:22:16.073 03:35:53 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:16.073 03:35:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:16.073 03:35:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:16.073 03:35:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.073 ************************************ 00:22:16.073 START TEST fio_dif_digest 00:22:16.073 ************************************ 00:22:16.073 03:35:53 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:22:16.073 03:35:53 -- target/dif.sh@123 -- # local NULL_DIF 00:22:16.073 03:35:53 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:16.073 03:35:53 -- target/dif.sh@125 -- # local hdgst ddgst 00:22:16.073 03:35:53 -- target/dif.sh@127 -- # NULL_DIF=3 00:22:16.073 03:35:53 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:16.073 03:35:53 -- target/dif.sh@127 -- # numjobs=3 00:22:16.073 03:35:53 -- target/dif.sh@127 -- # iodepth=3 00:22:16.073 03:35:53 -- target/dif.sh@127 -- # runtime=10 00:22:16.073 03:35:53 -- target/dif.sh@128 -- # hdgst=true 00:22:16.073 03:35:53 -- target/dif.sh@128 -- # ddgst=true 00:22:16.073 03:35:53 -- target/dif.sh@130 -- # create_subsystems 0 00:22:16.073 03:35:53 -- target/dif.sh@28 -- # local sub 00:22:16.073 03:35:53 -- target/dif.sh@30 -- # for sub in "$@" 00:22:16.073 03:35:53 -- target/dif.sh@31 -- # create_subsystem 0 00:22:16.073 03:35:53 -- target/dif.sh@18 -- # local sub_id=0 00:22:16.073 03:35:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:16.073 03:35:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.073 03:35:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.073 bdev_null0 00:22:16.073 03:35:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.073 03:35:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:16.073 03:35:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.073 03:35:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.073 03:35:53 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:22:16.073 03:35:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:16.073 03:35:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.073 03:35:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.331 03:35:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.331 03:35:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:16.331 03:35:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.331 03:35:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.331 [2024-04-19 03:35:53.636455] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.331 03:35:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.331 03:35:53 -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:16.331 03:35:53 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:16.331 03:35:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:16.331 03:35:53 -- nvmf/common.sh@521 -- # config=() 00:22:16.331 03:35:53 -- nvmf/common.sh@521 -- # local subsystem config 00:22:16.331 03:35:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:16.331 03:35:53 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:16.331 03:35:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:16.331 { 00:22:16.331 "params": { 00:22:16.331 "name": "Nvme$subsystem", 00:22:16.331 "trtype": "$TEST_TRANSPORT", 00:22:16.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.331 "adrfam": "ipv4", 00:22:16.331 "trsvcid": "$NVMF_PORT", 00:22:16.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.331 "hdgst": ${hdgst:-false}, 00:22:16.331 "ddgst": ${ddgst:-false} 00:22:16.331 }, 00:22:16.331 "method": "bdev_nvme_attach_controller" 00:22:16.331 } 00:22:16.331 EOF 00:22:16.331 )") 00:22:16.331 03:35:53 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:16.331 03:35:53 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:16.331 03:35:53 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:16.331 03:35:53 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:16.331 03:35:53 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:16.331 03:35:53 -- target/dif.sh@82 -- # gen_fio_conf 00:22:16.331 03:35:53 -- common/autotest_common.sh@1327 -- # shift 00:22:16.331 03:35:53 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:16.331 03:35:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:16.331 03:35:53 -- target/dif.sh@54 -- # local file 00:22:16.331 03:35:53 -- target/dif.sh@56 -- # cat 00:22:16.331 03:35:53 -- nvmf/common.sh@543 -- # cat 00:22:16.331 03:35:53 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:16.331 03:35:53 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:16.331 03:35:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:16.331 03:35:53 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:16.331 03:35:53 -- target/dif.sh@72 -- # (( file <= files )) 00:22:16.331 03:35:53 -- nvmf/common.sh@545 -- # jq . 
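The digest variant reuses the same plumbing: the null bdev is created with --dif-type 3 instead of 1, and the attach params printed just below come out with "hdgst": true and "ddgst": true, enabling NVMe/TCP's CRC32C header and data digests end to end. Starting from the target.json sketched earlier, the delta is two fields (digest.json is a hypothetical name; jq is already a dependency of the harness):

    jq '.subsystems[0].config[0].params.hdgst = true |
        .subsystems[0].config[0].params.ddgst = true' \
        target.json > digest.json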
00:22:16.331 03:35:53 -- nvmf/common.sh@546 -- # IFS=, 00:22:16.331 03:35:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:16.331 "params": { 00:22:16.331 "name": "Nvme0", 00:22:16.331 "trtype": "tcp", 00:22:16.331 "traddr": "10.0.0.2", 00:22:16.331 "adrfam": "ipv4", 00:22:16.331 "trsvcid": "4420", 00:22:16.331 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:16.331 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:16.331 "hdgst": true, 00:22:16.331 "ddgst": true 00:22:16.331 }, 00:22:16.331 "method": "bdev_nvme_attach_controller" 00:22:16.331 }' 00:22:16.331 03:35:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:16.331 03:35:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:16.331 03:35:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:16.331 03:35:53 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:16.331 03:35:53 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:16.331 03:35:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:16.331 03:35:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:16.331 03:35:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:16.331 03:35:53 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:16.332 03:35:53 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:16.332 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:16.332 ... 00:22:16.332 fio-3.35 00:22:16.332 Starting 3 threads 00:22:16.589 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.824 00:22:28.824 filename0: (groupid=0, jobs=1): err= 0: pid=344747: Fri Apr 19 03:36:04 2024 00:22:28.824 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(253MiB/10006msec) 00:22:28.824 slat (nsec): min=7826, max=44793, avg=12831.85, stdev=2634.31 00:22:28.824 clat (usec): min=9582, max=18618, avg=14805.21, stdev=1162.43 00:22:28.824 lat (usec): min=9594, max=18630, avg=14818.04, stdev=1162.43 00:22:28.824 clat percentiles (usec): 00:22:28.824 | 1.00th=[11731], 5.00th=[12911], 10.00th=[13435], 20.00th=[13829], 00:22:28.824 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14877], 60.00th=[15139], 00:22:28.824 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16319], 95.00th=[16712], 00:22:28.824 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18482], 00:22:28.824 | 99.99th=[18744] 00:22:28.824 bw ( KiB/s): min=24832, max=27648, per=34.31%, avg=25894.40, stdev=823.38, samples=20 00:22:28.824 iops : min= 194, max= 216, avg=202.30, stdev= 6.43, samples=20 00:22:28.824 lat (msec) : 10=0.10%, 20=99.90% 00:22:28.824 cpu : usr=90.86%, sys=8.66%, ctx=27, majf=0, minf=93 00:22:28.824 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:28.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:28.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:28.824 issued rwts: total=2025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:28.824 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:28.824 filename0: (groupid=0, jobs=1): err= 0: pid=344748: Fri Apr 19 03:36:04 2024 00:22:28.824 read: IOPS=196, BW=24.5MiB/s (25.7MB/s)(247MiB/10046msec) 00:22:28.825 slat (nsec): min=7736, max=99133, avg=12713.48, stdev=3366.61 00:22:28.825 clat (usec): min=10766, max=53159, 
avg=15235.86, stdev=1634.04 00:22:28.825 lat (usec): min=10793, max=53171, avg=15248.57, stdev=1634.04 00:22:28.825 clat percentiles (usec): 00:22:28.825 | 1.00th=[12256], 5.00th=[13304], 10.00th=[13829], 20.00th=[14222], 00:22:28.825 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15270], 60.00th=[15533], 00:22:28.825 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:22:28.825 | 99.00th=[17957], 99.50th=[18220], 99.90th=[49546], 99.95th=[53216], 00:22:28.825 | 99.99th=[53216] 00:22:28.825 bw ( KiB/s): min=24320, max=26368, per=33.43%, avg=25231.25, stdev=709.66, samples=20 00:22:28.825 iops : min= 190, max= 206, avg=197.10, stdev= 5.56, samples=20 00:22:28.825 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:22:28.825 cpu : usr=91.01%, sys=8.52%, ctx=32, majf=0, minf=196 00:22:28.825 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:28.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:28.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:28.825 issued rwts: total=1973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:28.825 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:28.825 filename0: (groupid=0, jobs=1): err= 0: pid=344749: Fri Apr 19 03:36:04 2024 00:22:28.825 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(241MiB/10046msec) 00:22:28.825 slat (nsec): min=7800, max=33969, avg=12465.79, stdev=2166.96 00:22:28.825 clat (usec): min=11035, max=57460, avg=15618.80, stdev=2318.87 00:22:28.825 lat (usec): min=11047, max=57472, avg=15631.26, stdev=2318.86 00:22:28.825 clat percentiles (usec): 00:22:28.825 | 1.00th=[12649], 5.00th=[13566], 10.00th=[13960], 20.00th=[14484], 00:22:28.825 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:22:28.825 | 70.00th=[16057], 80.00th=[16450], 90.00th=[17171], 95.00th=[17695], 00:22:28.825 | 99.00th=[19006], 99.50th=[19530], 99.90th=[57410], 99.95th=[57410], 00:22:28.825 | 99.99th=[57410] 00:22:28.825 bw ( KiB/s): min=22528, max=26624, per=32.62%, avg=24614.40, stdev=1014.78, samples=20 00:22:28.825 iops : min= 176, max= 208, avg=192.30, stdev= 7.93, samples=20 00:22:28.825 lat (msec) : 20=99.64%, 50=0.16%, 100=0.21% 00:22:28.825 cpu : usr=90.85%, sys=8.68%, ctx=18, majf=0, minf=153 00:22:28.825 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:28.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:28.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:28.825 issued rwts: total=1925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:28.825 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:28.825 00:22:28.825 Run status group 0 (all jobs): 00:22:28.825 READ: bw=73.7MiB/s (77.3MB/s), 24.0MiB/s-25.3MiB/s (25.1MB/s-26.5MB/s), io=740MiB (776MB), run=10006-10046msec 00:22:28.825 03:36:04 -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:28.825 03:36:04 -- target/dif.sh@43 -- # local sub 00:22:28.825 03:36:04 -- target/dif.sh@45 -- # for sub in "$@" 00:22:28.825 03:36:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:28.825 03:36:04 -- target/dif.sh@36 -- # local sub_id=0 00:22:28.825 03:36:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:28.825 03:36:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.825 03:36:04 -- common/autotest_common.sh@10 -- # set +x 00:22:28.825 03:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.825 03:36:04 -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:22:28.825 03:36:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.825 03:36:04 -- common/autotest_common.sh@10 -- # set +x 00:22:28.825 03:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.825 00:22:28.825 real 0m11.192s 00:22:28.825 user 0m28.532s 00:22:28.825 sys 0m2.861s 00:22:28.825 03:36:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:28.825 03:36:04 -- common/autotest_common.sh@10 -- # set +x 00:22:28.825 ************************************ 00:22:28.825 END TEST fio_dif_digest 00:22:28.825 ************************************ 00:22:28.825 03:36:04 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:28.825 03:36:04 -- target/dif.sh@147 -- # nvmftestfini 00:22:28.825 03:36:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:28.825 03:36:04 -- nvmf/common.sh@117 -- # sync 00:22:28.825 03:36:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:28.825 03:36:04 -- nvmf/common.sh@120 -- # set +e 00:22:28.825 03:36:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:28.825 03:36:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:28.825 rmmod nvme_tcp 00:22:28.825 rmmod nvme_fabrics 00:22:28.825 rmmod nvme_keyring 00:22:28.825 03:36:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:28.825 03:36:04 -- nvmf/common.sh@124 -- # set -e 00:22:28.825 03:36:04 -- nvmf/common.sh@125 -- # return 0 00:22:28.825 03:36:04 -- nvmf/common.sh@478 -- # '[' -n 338572 ']' 00:22:28.825 03:36:04 -- nvmf/common.sh@479 -- # killprocess 338572 00:22:28.825 03:36:04 -- common/autotest_common.sh@936 -- # '[' -z 338572 ']' 00:22:28.825 03:36:04 -- common/autotest_common.sh@940 -- # kill -0 338572 00:22:28.825 03:36:04 -- common/autotest_common.sh@941 -- # uname 00:22:28.825 03:36:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:28.825 03:36:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 338572 00:22:28.825 03:36:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:28.825 03:36:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:28.825 03:36:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 338572' 00:22:28.825 killing process with pid 338572 00:22:28.825 03:36:04 -- common/autotest_common.sh@955 -- # kill 338572 00:22:28.825 03:36:04 -- common/autotest_common.sh@960 -- # wait 338572 00:22:28.825 03:36:05 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:22:28.825 03:36:05 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:28.825 Waiting for block devices as requested 00:22:28.825 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:28.825 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:29.085 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:29.085 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:29.085 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:29.085 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:29.344 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:29.344 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:29.344 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:29.344 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:29.602 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:29.602 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:29.602 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:29.602 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:29.860 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:29.860 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:22:29.860 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:30.120 03:36:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:30.120 03:36:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:30.120 03:36:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:30.120 03:36:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:30.120 03:36:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.120 03:36:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:30.120 03:36:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.024 03:36:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:32.024 00:22:32.024 real 1m6.889s 00:22:32.024 user 6m25.251s 00:22:32.024 sys 0m21.431s 00:22:32.024 03:36:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:32.024 03:36:09 -- common/autotest_common.sh@10 -- # set +x 00:22:32.024 ************************************ 00:22:32.024 END TEST nvmf_dif 00:22:32.024 ************************************ 00:22:32.024 03:36:09 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:32.024 03:36:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:32.024 03:36:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:32.024 03:36:09 -- common/autotest_common.sh@10 -- # set +x 00:22:32.283 ************************************ 00:22:32.283 START TEST nvmf_abort_qd_sizes 00:22:32.283 ************************************ 00:22:32.283 03:36:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:32.283 * Looking for test storage... 
00:22:32.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:32.283 03:36:09 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.283 03:36:09 -- nvmf/common.sh@7 -- # uname -s 00:22:32.283 03:36:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.283 03:36:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.283 03:36:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.283 03:36:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.283 03:36:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.283 03:36:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.283 03:36:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.283 03:36:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.283 03:36:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.283 03:36:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.283 03:36:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.283 03:36:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.283 03:36:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.283 03:36:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.283 03:36:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.283 03:36:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.283 03:36:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.283 03:36:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.283 03:36:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.283 03:36:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.283 03:36:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.283 03:36:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.283 03:36:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.283 03:36:09 -- paths/export.sh@5 -- # export PATH 00:22:32.283 03:36:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.283 03:36:09 -- nvmf/common.sh@47 -- # : 0 00:22:32.283 03:36:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.283 03:36:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.283 03:36:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.283 03:36:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.283 03:36:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.283 03:36:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:32.283 03:36:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.283 03:36:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:32.283 03:36:09 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:32.283 03:36:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:32.283 03:36:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.283 03:36:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:32.283 03:36:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:32.283 03:36:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:32.283 03:36:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.283 03:36:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:32.283 03:36:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.283 03:36:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:32.283 03:36:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:32.283 03:36:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:32.283 03:36:09 -- common/autotest_common.sh@10 -- # set +x 00:22:34.187 03:36:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:34.187 03:36:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:34.187 03:36:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:34.187 03:36:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:34.187 03:36:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:34.187 03:36:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:34.187 03:36:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:34.187 03:36:11 -- nvmf/common.sh@295 -- # net_devs=() 00:22:34.187 03:36:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:34.187 03:36:11 -- nvmf/common.sh@296 -- # e810=() 00:22:34.187 03:36:11 -- nvmf/common.sh@296 -- # local -ga e810 00:22:34.187 03:36:11 -- nvmf/common.sh@297 -- # x722=() 00:22:34.187 03:36:11 -- nvmf/common.sh@297 -- # local -ga x722 00:22:34.187 03:36:11 -- nvmf/common.sh@298 -- # mlx=() 00:22:34.187 03:36:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:34.187 03:36:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.187 03:36:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.187 03:36:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.187 03:36:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.187 03:36:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.187 03:36:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.187 03:36:11 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.187 03:36:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.187 03:36:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.187 03:36:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.187 03:36:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.187 03:36:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:34.187 03:36:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:34.187 03:36:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:34.187 03:36:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.187 03:36:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:34.187 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:34.187 03:36:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.187 03:36:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:34.187 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:34.187 03:36:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:34.187 03:36:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.187 03:36:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.187 03:36:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:34.187 03:36:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.187 03:36:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:34.187 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:34.187 03:36:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.187 03:36:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.187 03:36:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.187 03:36:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:34.187 03:36:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.187 03:36:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:34.187 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:34.187 03:36:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.187 03:36:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:34.187 03:36:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:34.187 03:36:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:34.187 03:36:11 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:34.187 03:36:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:34.187 03:36:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.187 03:36:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.187 03:36:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.187 03:36:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:34.187 03:36:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.187 03:36:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.187 03:36:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:34.187 03:36:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.187 03:36:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.187 03:36:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:34.187 03:36:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:34.187 03:36:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.187 03:36:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.187 03:36:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.187 03:36:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.187 03:36:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:34.187 03:36:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.187 03:36:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.187 03:36:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.187 03:36:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:34.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:22:34.187 00:22:34.187 --- 10.0.0.2 ping statistics --- 00:22:34.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.187 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:34.187 03:36:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:34.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:22:34.187 00:22:34.187 --- 10.0.0.1 ping statistics --- 00:22:34.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.187 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:22:34.187 03:36:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.187 03:36:11 -- nvmf/common.sh@411 -- # return 0 00:22:34.187 03:36:11 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:22:34.187 03:36:11 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:35.563 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:35.563 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:35.563 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:35.563 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:35.563 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:35.563 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:35.563 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:35.563 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:35.563 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:35.563 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:35.563 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:35.563 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:35.563 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:35.563 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:35.563 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:35.563 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:36.497 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:22:36.497 03:36:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.497 03:36:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:36.497 03:36:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:36.497 03:36:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.497 03:36:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:36.497 03:36:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:36.497 03:36:14 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:36.497 03:36:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:36.497 03:36:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:36.497 03:36:14 -- common/autotest_common.sh@10 -- # set +x 00:22:36.497 03:36:14 -- nvmf/common.sh@470 -- # nvmfpid=350220 00:22:36.497 03:36:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:36.497 03:36:14 -- nvmf/common.sh@471 -- # waitforlisten 350220 00:22:36.497 03:36:14 -- common/autotest_common.sh@817 -- # '[' -z 350220 ']' 00:22:36.497 03:36:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.497 03:36:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:36.497 03:36:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.497 03:36:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:36.497 03:36:14 -- common/autotest_common.sh@10 -- # set +x 00:22:36.755 [2024-04-19 03:36:14.091745] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
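
The nvmf_tcp_init trace above builds a point-to-point NVMe/TCP topology out of the two e810 ports: cvl_0_1 stays in the default namespace as the initiator interface (10.0.0.1), while cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2), so initiator/target traffic actually crosses the cabled link. Every target-side command, including nvmf_tgt itself, is then prefixed with "ip netns exec cvl_0_0_ns_spdk". A condensed sketch of those steps, with interface names and addresses taken from the trace and error handling omitted:

    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
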
00:22:36.755 [2024-04-19 03:36:14.091822] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.755 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.755 [2024-04-19 03:36:14.156223] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:36.755 [2024-04-19 03:36:14.268907] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.755 [2024-04-19 03:36:14.268961] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.755 [2024-04-19 03:36:14.268978] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.755 [2024-04-19 03:36:14.268989] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.755 [2024-04-19 03:36:14.268998] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.755 [2024-04-19 03:36:14.269077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.755 [2024-04-19 03:36:14.269153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.755 [2024-04-19 03:36:14.269222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.755 [2024-04-19 03:36:14.269225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.013 03:36:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:37.013 03:36:14 -- common/autotest_common.sh@850 -- # return 0 00:22:37.013 03:36:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:37.013 03:36:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:37.013 03:36:14 -- common/autotest_common.sh@10 -- # set +x 00:22:37.013 03:36:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.013 03:36:14 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:37.013 03:36:14 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:37.013 03:36:14 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:37.013 03:36:14 -- scripts/common.sh@309 -- # local bdf bdfs 00:22:37.013 03:36:14 -- scripts/common.sh@310 -- # local nvmes 00:22:37.013 03:36:14 -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:22:37.013 03:36:14 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:22:37.013 03:36:14 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:22:37.013 03:36:14 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:22:37.013 03:36:14 -- scripts/common.sh@320 -- # uname -s 00:22:37.013 03:36:14 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:22:37.013 03:36:14 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:22:37.013 03:36:14 -- scripts/common.sh@325 -- # (( 1 )) 00:22:37.013 03:36:14 -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:22:37.013 03:36:14 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:22:37.013 03:36:14 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:22:37.013 03:36:14 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:37.013 03:36:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:37.013 03:36:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:37.013 03:36:14 -- 
common/autotest_common.sh@10 -- # set +x 00:22:37.013 ************************************ 00:22:37.013 START TEST spdk_target_abort 00:22:37.013 ************************************ 00:22:37.013 03:36:14 -- common/autotest_common.sh@1111 -- # spdk_target 00:22:37.013 03:36:14 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:37.013 03:36:14 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:22:37.013 03:36:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.013 03:36:14 -- common/autotest_common.sh@10 -- # set +x 00:22:39.797 spdk_targetn1 00:22:39.797 03:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.797 03:36:17 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:39.797 03:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.797 03:36:17 -- common/autotest_common.sh@10 -- # set +x 00:22:39.797 [2024-04-19 03:36:17.336092] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.797 03:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.797 03:36:17 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:22:39.797 03:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.797 03:36:17 -- common/autotest_common.sh@10 -- # set +x 00:22:39.797 03:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.797 03:36:17 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:22:39.797 03:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.797 03:36:17 -- common/autotest_common.sh@10 -- # set +x 00:22:40.055 03:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:22:40.055 03:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.055 03:36:17 -- common/autotest_common.sh@10 -- # set +x 00:22:40.055 [2024-04-19 03:36:17.368332] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.055 03:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:40.055 03:36:17 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:40.055 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.334 Initializing NVMe Controllers 00:22:43.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:43.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:43.334 Initialization complete. Launching workers. 00:22:43.334 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10678, failed: 0 00:22:43.334 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1255, failed to submit 9423 00:22:43.334 success 751, unsuccess 504, failed 0 00:22:43.334 03:36:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:43.334 03:36:20 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:43.334 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.611 [2024-04-19 03:36:23.857224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2126780 is same with the state(5) to be set 00:22:46.611 Initializing NVMe Controllers 00:22:46.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:46.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:46.611 Initialization complete. Launching workers. 00:22:46.611 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8548, failed: 0 00:22:46.611 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1218, failed to submit 7330 00:22:46.611 success 319, unsuccess 899, failed 0 00:22:46.611 03:36:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:46.611 03:36:23 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:46.611 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.892 Initializing NVMe Controllers 00:22:49.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:49.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:49.892 Initialization complete. Launching workers. 
00:22:49.892 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31331, failed: 0 00:22:49.892 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2731, failed to submit 28600 00:22:49.892 success 512, unsuccess 2219, failed 0 00:22:49.892 03:36:27 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:22:49.892 03:36:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.892 03:36:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.892 03:36:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.892 03:36:27 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:22:49.892 03:36:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.892 03:36:27 -- common/autotest_common.sh@10 -- # set +x 00:22:51.299 03:36:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.299 03:36:28 -- target/abort_qd_sizes.sh@61 -- # killprocess 350220 00:22:51.299 03:36:28 -- common/autotest_common.sh@936 -- # '[' -z 350220 ']' 00:22:51.299 03:36:28 -- common/autotest_common.sh@940 -- # kill -0 350220 00:22:51.299 03:36:28 -- common/autotest_common.sh@941 -- # uname 00:22:51.299 03:36:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:51.299 03:36:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 350220 00:22:51.299 03:36:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:51.299 03:36:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:51.299 03:36:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 350220' 00:22:51.299 killing process with pid 350220 00:22:51.299 03:36:28 -- common/autotest_common.sh@955 -- # kill 350220 00:22:51.299 03:36:28 -- common/autotest_common.sh@960 -- # wait 350220 00:22:51.557 00:22:51.557 real 0m14.376s 00:22:51.557 user 0m53.759s 00:22:51.557 sys 0m2.931s 00:22:51.557 03:36:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:51.557 03:36:28 -- common/autotest_common.sh@10 -- # set +x 00:22:51.557 ************************************ 00:22:51.557 END TEST spdk_target_abort 00:22:51.557 ************************************ 00:22:51.557 03:36:28 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:22:51.557 03:36:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:51.557 03:36:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:51.557 03:36:28 -- common/autotest_common.sh@10 -- # set +x 00:22:51.557 ************************************ 00:22:51.557 START TEST kernel_target_abort 00:22:51.557 ************************************ 00:22:51.557 03:36:28 -- common/autotest_common.sh@1111 -- # kernel_target 00:22:51.557 03:36:28 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:22:51.557 03:36:28 -- nvmf/common.sh@717 -- # local ip 00:22:51.557 03:36:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:51.557 03:36:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:51.557 03:36:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.557 03:36:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.557 03:36:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:51.557 03:36:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.557 03:36:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:51.557 03:36:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:51.557 03:36:28 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:22:51.557 03:36:28 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:51.557 03:36:28 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:51.557 03:36:28 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:22:51.557 03:36:28 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:51.557 03:36:28 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:51.557 03:36:28 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:51.557 03:36:28 -- nvmf/common.sh@628 -- # local block nvme 00:22:51.557 03:36:28 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:22:51.557 03:36:28 -- nvmf/common.sh@631 -- # modprobe nvmet 00:22:51.557 03:36:29 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:51.557 03:36:29 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:52.934 Waiting for block devices as requested 00:22:52.934 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:52.934 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:52.934 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:52.934 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:52.934 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:52.934 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:53.192 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:53.192 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:53.192 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:53.192 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:53.450 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:53.450 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:53.450 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:53.450 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:53.709 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:53.709 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:53.709 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:53.967 03:36:31 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:53.967 03:36:31 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:53.967 03:36:31 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:22:53.967 03:36:31 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:53.967 03:36:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:53.967 03:36:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:53.967 03:36:31 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:22:53.967 03:36:31 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:53.967 03:36:31 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:53.967 No valid GPT data, bailing 00:22:53.967 03:36:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:53.967 03:36:31 -- scripts/common.sh@391 -- # pt= 00:22:53.967 03:36:31 -- scripts/common.sh@392 -- # return 1 00:22:53.967 03:36:31 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:22:53.967 03:36:31 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:22:53.967 03:36:31 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:53.967 03:36:31 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:53.967 03:36:31 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:53.967 03:36:31 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:53.967 03:36:31 -- nvmf/common.sh@656 -- # echo 1 00:22:53.967 03:36:31 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:22:53.967 03:36:31 -- nvmf/common.sh@658 -- # echo 1 00:22:53.967 03:36:31 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:22:53.967 03:36:31 -- nvmf/common.sh@661 -- # echo tcp 00:22:53.967 03:36:31 -- nvmf/common.sh@662 -- # echo 4420 00:22:53.967 03:36:31 -- nvmf/common.sh@663 -- # echo ipv4 00:22:53.967 03:36:31 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:53.967 03:36:31 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:22:53.967 00:22:53.967 Discovery Log Number of Records 2, Generation counter 2 00:22:53.967 =====Discovery Log Entry 0====== 00:22:53.967 trtype: tcp 00:22:53.967 adrfam: ipv4 00:22:53.967 subtype: current discovery subsystem 00:22:53.967 treq: not specified, sq flow control disable supported 00:22:53.967 portid: 1 00:22:53.967 trsvcid: 4420 00:22:53.967 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:53.967 traddr: 10.0.0.1 00:22:53.967 eflags: none 00:22:53.967 sectype: none 00:22:53.967 =====Discovery Log Entry 1====== 00:22:53.967 trtype: tcp 00:22:53.968 adrfam: ipv4 00:22:53.968 subtype: nvme subsystem 00:22:53.968 treq: not specified, sq flow control disable supported 00:22:53.968 portid: 1 00:22:53.968 trsvcid: 4420 00:22:53.968 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:53.968 traddr: 10.0.0.1 00:22:53.968 eflags: none 00:22:53.968 sectype: none 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:53.968 03:36:31 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:53.968 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.250 Initializing NVMe Controllers 00:22:57.250 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:57.250 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:57.250 Initialization complete. Launching workers. 00:22:57.250 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31166, failed: 0 00:22:57.250 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31166, failed to submit 0 00:22:57.250 success 0, unsuccess 31166, failed 0 00:22:57.250 03:36:34 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:57.250 03:36:34 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:57.250 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.531 Initializing NVMe Controllers 00:23:00.531 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:00.531 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:00.531 Initialization complete. Launching workers. 00:23:00.531 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62151, failed: 0 00:23:00.531 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15658, failed to submit 46493 00:23:00.531 success 0, unsuccess 15658, failed 0 00:23:00.531 03:36:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:00.532 03:36:37 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:00.532 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.060 Initializing NVMe Controllers 00:23:03.060 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:03.060 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:03.060 Initialization complete. Launching workers. 
00:23:03.060 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61097, failed: 0 00:23:03.060 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15230, failed to submit 45867 00:23:03.060 success 0, unsuccess 15230, failed 0 00:23:03.060 03:36:40 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:03.060 03:36:40 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:03.060 03:36:40 -- nvmf/common.sh@675 -- # echo 0 00:23:03.060 03:36:40 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:03.060 03:36:40 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:03.060 03:36:40 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:03.060 03:36:40 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:03.060 03:36:40 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:03.060 03:36:40 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:03.319 03:36:40 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:04.253 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:04.253 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:04.253 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:04.253 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:04.253 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:04.253 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:04.253 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:04.253 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:04.253 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:04.253 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:04.253 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:04.253 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:04.253 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:04.253 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:04.253 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:04.253 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:05.189 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:05.448 00:23:05.448 real 0m13.897s 00:23:05.448 user 0m4.887s 00:23:05.448 sys 0m3.299s 00:23:05.448 03:36:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:05.448 03:36:42 -- common/autotest_common.sh@10 -- # set +x 00:23:05.448 ************************************ 00:23:05.448 END TEST kernel_target_abort 00:23:05.448 ************************************ 00:23:05.448 03:36:42 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:05.448 03:36:42 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:05.448 03:36:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:05.448 03:36:42 -- nvmf/common.sh@117 -- # sync 00:23:05.448 03:36:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:05.448 03:36:42 -- nvmf/common.sh@120 -- # set +e 00:23:05.448 03:36:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:05.448 03:36:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:05.448 rmmod nvme_tcp 00:23:05.448 rmmod nvme_fabrics 00:23:05.448 rmmod nvme_keyring 00:23:05.448 03:36:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:05.448 03:36:42 -- nvmf/common.sh@124 -- # set -e 00:23:05.448 03:36:42 -- nvmf/common.sh@125 -- # return 0 00:23:05.448 03:36:42 -- nvmf/common.sh@478 -- # '[' -n 350220 ']' 
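
kernel_target_abort runs the same abort workload against a Linux-kernel NVMe/TCP target instead of the SPDK one. configure_kernel_target, traced above, provisions that target entirely through configfs: it creates the subsystem and namespace directories, points the namespace at the locally attached /dev/nvme0n1, and binds the subsystem to a TCP port on 10.0.0.1:4420; clean_kernel_target then reverses each step. The trace shows the mkdir/echo/ln -s sequence but not the attribute files the redirects write to, so the sketch below reconstructs those names from the kernel's nvmet configfs layout (attr_serial, attr_allow_any_host, device_path, enable, addr_*); treat them as an assumption rather than part of the trace:

    nqn=nqn.2016-06.io.spdk:testnqn
    sub=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo "SPDK-$nqn"  > "$sub/attr_serial"               # assumed redirect targets;
    echo 1            > "$sub/attr_allow_any_host"       # the trace only shows the
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"  # echo'd values
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                     # expose the subsystem on the port
    # teardown (clean_kernel_target) reverses this: echo 0 > .../enable,
    # rm -f the port symlink, rmdir namespace/port/subsystem,
    # then modprobe -r nvmet_tcp nvmet
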
00:23:05.448 03:36:42 -- nvmf/common.sh@479 -- # killprocess 350220 00:23:05.448 03:36:42 -- common/autotest_common.sh@936 -- # '[' -z 350220 ']' 00:23:05.448 03:36:42 -- common/autotest_common.sh@940 -- # kill -0 350220 00:23:05.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (350220) - No such process 00:23:05.448 03:36:42 -- common/autotest_common.sh@963 -- # echo 'Process with pid 350220 is not found' 00:23:05.448 Process with pid 350220 is not found 00:23:05.448 03:36:42 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:23:05.448 03:36:42 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:06.383 Waiting for block devices as requested 00:23:06.383 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:06.643 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:06.643 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:06.643 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:06.901 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:06.901 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:06.901 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:06.901 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:06.901 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:07.158 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:07.158 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:07.158 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:07.417 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:07.417 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:07.417 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:07.417 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:07.677 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:07.677 03:36:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:07.677 03:36:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:07.677 03:36:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.677 03:36:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.677 03:36:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.677 03:36:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:07.677 03:36:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.585 03:36:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:09.585 00:23:09.585 real 0m37.536s 00:23:09.585 user 1m0.657s 00:23:09.585 sys 0m9.510s 00:23:09.585 03:36:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:09.844 03:36:47 -- common/autotest_common.sh@10 -- # set +x 00:23:09.844 ************************************ 00:23:09.844 END TEST nvmf_abort_qd_sizes 00:23:09.844 ************************************ 00:23:09.844 03:36:47 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:23:09.844 03:36:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:09.844 03:36:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:09.844 03:36:47 -- common/autotest_common.sh@10 -- # set +x 00:23:09.844 ************************************ 00:23:09.844 START TEST keyring_file 00:23:09.844 ************************************ 00:23:09.844 03:36:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:23:09.844 * Looking for test storage... 
00:23:09.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:23:09.844 03:36:47 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:23:09.844 03:36:47 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.844 03:36:47 -- nvmf/common.sh@7 -- # uname -s 00:23:09.844 03:36:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.844 03:36:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.844 03:36:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.844 03:36:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.844 03:36:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.844 03:36:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.844 03:36:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.844 03:36:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.844 03:36:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.844 03:36:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.844 03:36:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:09.844 03:36:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:09.844 03:36:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.844 03:36:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.844 03:36:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.844 03:36:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.844 03:36:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.844 03:36:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.844 03:36:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.844 03:36:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.844 03:36:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.844 03:36:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.844 03:36:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.844 03:36:47 -- paths/export.sh@5 -- # export PATH 00:23:09.844 03:36:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.844 03:36:47 -- nvmf/common.sh@47 -- # : 0 00:23:09.844 03:36:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:09.844 03:36:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:09.844 03:36:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.844 03:36:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.844 03:36:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.844 03:36:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:09.844 03:36:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:09.844 03:36:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:09.844 03:36:47 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:09.844 03:36:47 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:09.844 03:36:47 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:09.844 03:36:47 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:09.844 03:36:47 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:09.844 03:36:47 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:09.844 03:36:47 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:09.844 03:36:47 -- keyring/common.sh@15 -- # local name key digest path 00:23:09.844 03:36:47 -- keyring/common.sh@17 -- # name=key0 00:23:09.844 03:36:47 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:09.844 03:36:47 -- keyring/common.sh@17 -- # digest=0 00:23:09.844 03:36:47 -- keyring/common.sh@18 -- # mktemp 00:23:09.844 03:36:47 -- keyring/common.sh@18 -- # path=/tmp/tmp.1h4vxRysH0 00:23:09.844 03:36:47 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:09.844 03:36:47 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:09.844 03:36:47 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:09.844 03:36:47 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:09.844 03:36:47 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:09.844 03:36:47 -- nvmf/common.sh@693 -- # digest=0 00:23:09.844 03:36:47 -- nvmf/common.sh@694 -- # python - 00:23:09.844 03:36:47 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1h4vxRysH0 00:23:09.844 03:36:47 -- keyring/common.sh@23 -- # echo /tmp/tmp.1h4vxRysH0 00:23:09.844 03:36:47 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.1h4vxRysH0 00:23:09.844 03:36:47 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:09.844 03:36:47 -- keyring/common.sh@15 -- # local name key digest path 00:23:09.844 03:36:47 -- keyring/common.sh@17 -- # name=key1 00:23:09.844 03:36:47 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:09.844 03:36:47 -- keyring/common.sh@17 -- # digest=0 00:23:09.844 03:36:47 -- keyring/common.sh@18 -- # mktemp 00:23:09.844 03:36:47 -- keyring/common.sh@18 -- # path=/tmp/tmp.o0RGZXAcmz 00:23:09.844 03:36:47 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:09.844 03:36:47 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:23:09.844 03:36:47 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:09.844 03:36:47 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:09.844 03:36:47 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:23:09.844 03:36:47 -- nvmf/common.sh@693 -- # digest=0 00:23:09.844 03:36:47 -- nvmf/common.sh@694 -- # python - 00:23:09.844 03:36:47 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.o0RGZXAcmz 00:23:10.103 03:36:47 -- keyring/common.sh@23 -- # echo /tmp/tmp.o0RGZXAcmz 00:23:10.103 03:36:47 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.o0RGZXAcmz 00:23:10.103 03:36:47 -- keyring/file.sh@30 -- # tgtpid=356009 00:23:10.103 03:36:47 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:23:10.103 03:36:47 -- keyring/file.sh@32 -- # waitforlisten 356009 00:23:10.103 03:36:47 -- common/autotest_common.sh@817 -- # '[' -z 356009 ']' 00:23:10.103 03:36:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.103 03:36:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:10.103 03:36:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.103 03:36:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:10.103 03:36:47 -- common/autotest_common.sh@10 -- # set +x 00:23:10.103 [2024-04-19 03:36:47.449356] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 00:23:10.103 [2024-04-19 03:36:47.449477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356009 ] 00:23:10.103 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.103 [2024-04-19 03:36:47.507113] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.103 [2024-04-19 03:36:47.626670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.361 03:36:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:10.361 03:36:47 -- common/autotest_common.sh@850 -- # return 0 00:23:10.361 03:36:47 -- keyring/file.sh@33 -- # rpc_cmd 00:23:10.361 03:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.361 03:36:47 -- common/autotest_common.sh@10 -- # set +x 00:23:10.361 [2024-04-19 03:36:47.880008] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.361 null0 00:23:10.361 [2024-04-19 03:36:47.912061] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.361 [2024-04-19 03:36:47.912578] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:10.361 [2024-04-19 03:36:47.920080] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:10.621 03:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.621 03:36:47 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:10.621 03:36:47 -- common/autotest_common.sh@638 -- # local es=0 00:23:10.621 03:36:47 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:10.621 03:36:47 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:10.621 03:36:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:10.621 03:36:47 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:10.621 03:36:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:10.621 03:36:47 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:10.621 03:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.621 03:36:47 -- common/autotest_common.sh@10 -- # set +x 00:23:10.621 [2024-04-19 03:36:47.928068] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:23:10.621 { 00:23:10.621 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:10.621 "secure_channel": false, 00:23:10.621 "listen_address": { 00:23:10.621 "trtype": "tcp", 00:23:10.621 "traddr": "127.0.0.1", 00:23:10.621 "trsvcid": "4420" 00:23:10.621 }, 00:23:10.621 "method": "nvmf_subsystem_add_listener", 00:23:10.621 "req_id": 1 00:23:10.621 } 00:23:10.621 Got JSON-RPC error response 00:23:10.621 response: 00:23:10.621 { 00:23:10.621 "code": -32602, 00:23:10.621 "message": "Invalid parameters" 00:23:10.621 } 00:23:10.621 03:36:47 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:10.621 03:36:47 -- common/autotest_common.sh@641 -- # es=1 00:23:10.621 03:36:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:10.621 03:36:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:10.621 03:36:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:10.621 03:36:47 -- keyring/file.sh@46 -- # bperfpid=356019 00:23:10.621 03:36:47 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:10.621 03:36:47 -- keyring/file.sh@48 -- # waitforlisten 356019 /var/tmp/bperf.sock 00:23:10.621 03:36:47 -- common/autotest_common.sh@817 -- # '[' -z 356019 ']' 00:23:10.621 03:36:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:10.621 03:36:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:10.621 03:36:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:10.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:10.621 03:36:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:10.621 03:36:47 -- common/autotest_common.sh@10 -- # set +x 00:23:10.621 [2024-04-19 03:36:47.974048] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
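
Before the keyring cases run, prep_key converts each raw hex key into the NVMe TLS PSK interchange format and stores it in a mode-0600 temp file; the chmod 0660 and rm -f cases further below then verify that keyring_file_add_key rejects overly permissive or missing key files. A sketch of that conversion, assuming the interchange encoding is "NVMeTLSkey-1:<hash>:<base64 of key bytes plus CRC-32>:" with hash indicator 00 for an unhashed key and a little-endian CRC (the trace elides the body of its inline "python -" step, so both details are assumptions):

    key=00112233445566778899aabbccddeeff   # key0 from the trace
    path=$(mktemp)                         # e.g. /tmp/tmp.1h4vxRysH0
    python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:00:"+base64.b64encode(k+c).decode()+":")' "$key" > "$path"
    chmod 0600 "$path"
    # register it with the running bdevperf instance over its RPC socket:
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"
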
00:23:10.621 [2024-04-19 03:36:47.974111] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356019 ] 00:23:10.621 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.621 [2024-04-19 03:36:48.033498] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.621 [2024-04-19 03:36:48.149335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.880 03:36:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:10.880 03:36:48 -- common/autotest_common.sh@850 -- # return 0 00:23:10.880 03:36:48 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1h4vxRysH0 00:23:10.880 03:36:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1h4vxRysH0 00:23:11.138 03:36:48 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.o0RGZXAcmz 00:23:11.138 03:36:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.o0RGZXAcmz 00:23:11.427 03:36:48 -- keyring/file.sh@51 -- # get_key key0 00:23:11.427 03:36:48 -- keyring/file.sh@51 -- # jq -r .path 00:23:11.427 03:36:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:11.427 03:36:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:11.427 03:36:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:11.686 03:36:49 -- keyring/file.sh@51 -- # [[ /tmp/tmp.1h4vxRysH0 == \/\t\m\p\/\t\m\p\.\1\h\4\v\x\R\y\s\H\0 ]] 00:23:11.686 03:36:49 -- keyring/file.sh@52 -- # get_key key1 00:23:11.686 03:36:49 -- keyring/file.sh@52 -- # jq -r .path 00:23:11.686 03:36:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:11.686 03:36:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:11.686 03:36:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:11.686 03:36:49 -- keyring/file.sh@52 -- # [[ /tmp/tmp.o0RGZXAcmz == \/\t\m\p\/\t\m\p\.\o\0\R\G\Z\X\A\c\m\z ]] 00:23:11.944 03:36:49 -- keyring/file.sh@53 -- # get_refcnt key0 00:23:11.944 03:36:49 -- keyring/common.sh@12 -- # get_key key0 00:23:11.944 03:36:49 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:11.944 03:36:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:11.944 03:36:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:11.944 03:36:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:11.944 03:36:49 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:23:11.944 03:36:49 -- keyring/file.sh@54 -- # get_refcnt key1 00:23:11.944 03:36:49 -- keyring/common.sh@12 -- # get_key key1 00:23:11.944 03:36:49 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:11.944 03:36:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:11.944 03:36:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:11.944 03:36:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:12.202 03:36:49 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:12.202 03:36:49 
-- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:12.202 03:36:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:12.460 [2024-04-19 03:36:49.960576] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.718 nvme0n1 00:23:12.718 03:36:50 -- keyring/file.sh@59 -- # get_refcnt key0 00:23:12.718 03:36:50 -- keyring/common.sh@12 -- # get_key key0 00:23:12.718 03:36:50 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:12.718 03:36:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:12.718 03:36:50 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:12.718 03:36:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:12.979 03:36:50 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:23:12.979 03:36:50 -- keyring/file.sh@60 -- # get_refcnt key1 00:23:12.979 03:36:50 -- keyring/common.sh@12 -- # get_key key1 00:23:12.979 03:36:50 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:12.979 03:36:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:12.979 03:36:50 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:12.979 03:36:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:12.979 03:36:50 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:23:12.979 03:36:50 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:13.237 Running I/O for 1 seconds... 
00:23:14.170 00:23:14.170 Latency(us) 00:23:14.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.170 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:14.170 nvme0n1 : 1.03 4388.48 17.14 0.00 0.00 28783.35 6893.42 36311.80 00:23:14.170 =================================================================================================================== 00:23:14.170 Total : 4388.48 17.14 0.00 0.00 28783.35 6893.42 36311.80 00:23:14.170 0 00:23:14.170 03:36:51 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:14.170 03:36:51 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:14.428 03:36:51 -- keyring/file.sh@65 -- # get_refcnt key0 00:23:14.428 03:36:51 -- keyring/common.sh@12 -- # get_key key0 00:23:14.428 03:36:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:14.428 03:36:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:14.428 03:36:51 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:14.428 03:36:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:14.686 03:36:52 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:23:14.686 03:36:52 -- keyring/file.sh@66 -- # get_refcnt key1 00:23:14.686 03:36:52 -- keyring/common.sh@12 -- # get_key key1 00:23:14.686 03:36:52 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:14.686 03:36:52 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:14.686 03:36:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:14.686 03:36:52 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:14.944 03:36:52 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:14.944 03:36:52 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:14.944 03:36:52 -- common/autotest_common.sh@638 -- # local es=0 00:23:14.944 03:36:52 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:14.944 03:36:52 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:14.944 03:36:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:14.944 03:36:52 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:14.944 03:36:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:14.944 03:36:52 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:14.944 03:36:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:15.203 [2024-04-19 03:36:52.650578] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:15.203 [2024-04-19 03:36:52.651480] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738fe0 (107): Transport endpoint is not connected 00:23:15.203 [2024-04-19 03:36:52.652474] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738fe0 (9): Bad file descriptor 00:23:15.203 [2024-04-19 03:36:52.653472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:15.203 [2024-04-19 03:36:52.653492] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:15.203 [2024-04-19 03:36:52.653505] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:15.203 request: 00:23:15.203 { 00:23:15.203 "name": "nvme0", 00:23:15.203 "trtype": "tcp", 00:23:15.203 "traddr": "127.0.0.1", 00:23:15.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:15.203 "adrfam": "ipv4", 00:23:15.203 "trsvcid": "4420", 00:23:15.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.203 "psk": "key1", 00:23:15.203 "method": "bdev_nvme_attach_controller", 00:23:15.203 "req_id": 1 00:23:15.203 } 00:23:15.203 Got JSON-RPC error response 00:23:15.203 response: 00:23:15.203 { 00:23:15.203 "code": -32602, 00:23:15.203 "message": "Invalid parameters" 00:23:15.203 } 00:23:15.203 03:36:52 -- common/autotest_common.sh@641 -- # es=1 00:23:15.203 03:36:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:15.203 03:36:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:15.203 03:36:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:15.203 03:36:52 -- keyring/file.sh@71 -- # get_refcnt key0 00:23:15.203 03:36:52 -- keyring/common.sh@12 -- # get_key key0 00:23:15.203 03:36:52 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:15.203 03:36:52 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:15.203 03:36:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:15.203 03:36:52 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:15.460 03:36:52 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:23:15.460 03:36:52 -- keyring/file.sh@72 -- # get_refcnt key1 00:23:15.460 03:36:52 -- keyring/common.sh@12 -- # get_key key1 00:23:15.460 03:36:52 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:15.460 03:36:52 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:15.460 03:36:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:15.460 03:36:52 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:15.718 03:36:53 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:15.718 03:36:53 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:23:15.718 03:36:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:15.975 03:36:53 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:23:15.975 03:36:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:16.232 03:36:53 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:23:16.232 03:36:53 -- keyring/file.sh@77 -- # jq length 00:23:16.232 03:36:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:16.490 03:36:53 -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:23:16.490 03:36:53 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.1h4vxRysH0 00:23:16.490 03:36:53 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.1h4vxRysH0 00:23:16.490 03:36:53 -- common/autotest_common.sh@638 -- # local es=0 00:23:16.490 03:36:53 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.1h4vxRysH0 00:23:16.490 03:36:53 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:16.490 03:36:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:16.490 03:36:53 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:16.491 03:36:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:16.491 03:36:53 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1h4vxRysH0 00:23:16.491 03:36:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1h4vxRysH0 00:23:16.749 [2024-04-19 03:36:54.094281] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.1h4vxRysH0': 0100660 00:23:16.749 [2024-04-19 03:36:54.094322] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:16.749 request: 00:23:16.749 { 00:23:16.749 "name": "key0", 00:23:16.749 "path": "/tmp/tmp.1h4vxRysH0", 00:23:16.749 "method": "keyring_file_add_key", 00:23:16.749 "req_id": 1 00:23:16.749 } 00:23:16.749 Got JSON-RPC error response 00:23:16.749 response: 00:23:16.749 { 00:23:16.749 "code": -1, 00:23:16.749 "message": "Operation not permitted" 00:23:16.749 } 00:23:16.749 03:36:54 -- common/autotest_common.sh@641 -- # es=1 00:23:16.749 03:36:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:16.749 03:36:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:16.749 03:36:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:16.749 03:36:54 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.1h4vxRysH0 00:23:16.749 03:36:54 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1h4vxRysH0 00:23:16.749 03:36:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1h4vxRysH0 00:23:17.007 03:36:54 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.1h4vxRysH0 00:23:17.007 03:36:54 -- keyring/file.sh@88 -- # get_refcnt key0 00:23:17.007 03:36:54 -- keyring/common.sh@12 -- # get_key key0 00:23:17.007 03:36:54 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:17.007 03:36:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:17.007 03:36:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:17.007 03:36:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:17.268 03:36:54 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:23:17.268 03:36:54 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:17.268 03:36:54 -- common/autotest_common.sh@638 -- # local es=0 00:23:17.268 03:36:54 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:17.268 03:36:54 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:17.268 03:36:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:17.268 03:36:54 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:17.268 03:36:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:17.268 03:36:54 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:17.268 03:36:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:17.268 [2024-04-19 03:36:54.812221] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.1h4vxRysH0': No such file or directory 00:23:17.268 [2024-04-19 03:36:54.812261] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:17.268 [2024-04-19 03:36:54.812296] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:17.268 [2024-04-19 03:36:54.812307] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:17.268 [2024-04-19 03:36:54.812318] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:17.268 request: 00:23:17.268 { 00:23:17.268 "name": "nvme0", 00:23:17.268 "trtype": "tcp", 00:23:17.268 "traddr": "127.0.0.1", 00:23:17.268 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:17.268 "adrfam": "ipv4", 00:23:17.268 "trsvcid": "4420", 00:23:17.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:17.268 "psk": "key0", 00:23:17.268 "method": "bdev_nvme_attach_controller", 00:23:17.268 "req_id": 1 00:23:17.268 } 00:23:17.268 Got JSON-RPC error response 00:23:17.268 response: 00:23:17.268 { 00:23:17.268 "code": -19, 00:23:17.268 "message": "No such device" 00:23:17.268 } 00:23:17.526 03:36:54 -- common/autotest_common.sh@641 -- # es=1 00:23:17.526 03:36:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:17.526 03:36:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:17.526 03:36:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:17.526 03:36:54 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:23:17.526 03:36:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:17.526 03:36:55 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:17.526 03:36:55 -- keyring/common.sh@15 -- # local name key digest path 00:23:17.526 03:36:55 -- keyring/common.sh@17 -- # name=key0 00:23:17.526 03:36:55 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:17.527 03:36:55 -- keyring/common.sh@17 -- # digest=0 00:23:17.527 03:36:55 -- keyring/common.sh@18 -- # mktemp 00:23:17.527 03:36:55 -- keyring/common.sh@18 -- # path=/tmp/tmp.S3cyMn1nxL 00:23:17.527 03:36:55 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:17.527 03:36:55 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:17.527 03:36:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:17.527 03:36:55 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:17.527 03:36:55 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:17.527 03:36:55 -- nvmf/common.sh@693 -- # digest=0 00:23:17.527 03:36:55 -- nvmf/common.sh@694 -- # python - 00:23:17.787 03:36:55 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.S3cyMn1nxL 00:23:17.787 03:36:55 -- keyring/common.sh@23 -- # echo /tmp/tmp.S3cyMn1nxL 00:23:17.787 03:36:55 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.S3cyMn1nxL 00:23:17.787 03:36:55 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S3cyMn1nxL 00:23:17.787 03:36:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S3cyMn1nxL 00:23:18.047 03:36:55 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:18.047 03:36:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:18.306 nvme0n1 00:23:18.306 03:36:55 -- keyring/file.sh@99 -- # get_refcnt key0 00:23:18.306 03:36:55 -- keyring/common.sh@12 -- # get_key key0 00:23:18.306 03:36:55 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:18.306 03:36:55 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:18.306 03:36:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:18.306 03:36:55 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:18.563 03:36:55 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:23:18.563 03:36:55 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:23:18.563 03:36:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:18.822 03:36:56 -- keyring/file.sh@101 -- # get_key key0 00:23:18.822 03:36:56 -- keyring/file.sh@101 -- # jq -r .removed 00:23:18.822 03:36:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:18.822 03:36:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:18.822 03:36:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:19.080 03:36:56 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:23:19.080 03:36:56 -- keyring/file.sh@102 -- # get_refcnt key0 00:23:19.080 03:36:56 -- keyring/common.sh@12 -- # get_key key0 00:23:19.080 03:36:56 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:19.080 03:36:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:19.080 03:36:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:19.080 03:36:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:19.337 03:36:56 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:23:19.338 03:36:56 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:19.338 03:36:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:19.338 03:36:56 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:23:19.338 03:36:56 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:19.338 03:36:56 -- keyring/file.sh@104 -- # jq length 00:23:19.595 03:36:57 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:23:19.595 03:36:57 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S3cyMn1nxL 00:23:19.595 03:36:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S3cyMn1nxL 00:23:19.853 03:36:57 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.o0RGZXAcmz 00:23:19.853 03:36:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.o0RGZXAcmz 00:23:20.111 03:36:57 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:20.111 03:36:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:20.369 nvme0n1 00:23:20.369 03:36:57 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:23:20.369 03:36:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:20.939 03:36:58 -- keyring/file.sh@112 -- # config='{ 00:23:20.939 "subsystems": [ 00:23:20.939 { 00:23:20.939 "subsystem": "keyring", 00:23:20.939 "config": [ 00:23:20.939 { 00:23:20.939 "method": "keyring_file_add_key", 00:23:20.939 "params": { 00:23:20.939 "name": "key0", 00:23:20.939 "path": "/tmp/tmp.S3cyMn1nxL" 00:23:20.939 } 00:23:20.939 }, 00:23:20.939 { 00:23:20.939 "method": "keyring_file_add_key", 00:23:20.939 "params": { 00:23:20.939 "name": "key1", 00:23:20.939 "path": "/tmp/tmp.o0RGZXAcmz" 00:23:20.939 } 00:23:20.939 } 00:23:20.939 ] 00:23:20.939 }, 00:23:20.939 { 00:23:20.939 "subsystem": "iobuf", 00:23:20.939 "config": [ 00:23:20.939 { 00:23:20.939 "method": "iobuf_set_options", 00:23:20.939 "params": { 00:23:20.939 "small_pool_count": 8192, 00:23:20.939 "large_pool_count": 1024, 00:23:20.939 "small_bufsize": 8192, 00:23:20.939 "large_bufsize": 135168 00:23:20.939 } 00:23:20.939 } 00:23:20.939 ] 00:23:20.939 }, 00:23:20.939 { 00:23:20.939 "subsystem": "sock", 00:23:20.939 "config": [ 00:23:20.939 { 00:23:20.939 "method": "sock_impl_set_options", 00:23:20.939 "params": { 00:23:20.939 "impl_name": "posix", 00:23:20.939 "recv_buf_size": 2097152, 00:23:20.939 "send_buf_size": 2097152, 00:23:20.939 "enable_recv_pipe": true, 00:23:20.939 "enable_quickack": false, 00:23:20.939 "enable_placement_id": 0, 00:23:20.939 "enable_zerocopy_send_server": true, 00:23:20.939 "enable_zerocopy_send_client": false, 00:23:20.939 "zerocopy_threshold": 0, 00:23:20.939 "tls_version": 0, 00:23:20.939 "enable_ktls": false 00:23:20.939 } 00:23:20.939 }, 00:23:20.939 { 00:23:20.939 "method": "sock_impl_set_options", 00:23:20.939 "params": { 00:23:20.939 "impl_name": "ssl", 00:23:20.939 "recv_buf_size": 4096, 00:23:20.939 "send_buf_size": 4096, 00:23:20.939 "enable_recv_pipe": true, 00:23:20.939 "enable_quickack": false, 00:23:20.939 "enable_placement_id": 0, 00:23:20.939 "enable_zerocopy_send_server": true, 00:23:20.939 "enable_zerocopy_send_client": false, 00:23:20.939 "zerocopy_threshold": 
0, 00:23:20.939 "tls_version": 0, 00:23:20.939 "enable_ktls": false 00:23:20.939 } 00:23:20.939 } 00:23:20.939 ] 00:23:20.939 }, 00:23:20.939 { 00:23:20.939 "subsystem": "vmd", 00:23:20.939 "config": [] 00:23:20.939 }, 00:23:20.939 { 00:23:20.939 "subsystem": "accel", 00:23:20.939 "config": [ 00:23:20.939 { 00:23:20.939 "method": "accel_set_options", 00:23:20.939 "params": { 00:23:20.939 "small_cache_size": 128, 00:23:20.939 "large_cache_size": 16, 00:23:20.939 "task_count": 2048, 00:23:20.939 "sequence_count": 2048, 00:23:20.939 "buf_count": 2048 00:23:20.939 } 00:23:20.939 } 00:23:20.939 ] 00:23:20.939 }, 00:23:20.939 { 00:23:20.939 "subsystem": "bdev", 00:23:20.939 "config": [ 00:23:20.939 { 00:23:20.939 "method": "bdev_set_options", 00:23:20.939 "params": { 00:23:20.939 "bdev_io_pool_size": 65535, 00:23:20.939 "bdev_io_cache_size": 256, 00:23:20.939 "bdev_auto_examine": true, 00:23:20.939 "iobuf_small_cache_size": 128, 00:23:20.939 "iobuf_large_cache_size": 16 00:23:20.939 } 00:23:20.939 }, 00:23:20.939 { 00:23:20.939 "method": "bdev_raid_set_options", 00:23:20.939 "params": { 00:23:20.939 "process_window_size_kb": 1024 00:23:20.939 } 00:23:20.939 }, 00:23:20.939 { 00:23:20.939 "method": "bdev_iscsi_set_options", 00:23:20.939 "params": { 00:23:20.939 "timeout_sec": 30 00:23:20.939 } 00:23:20.939 }, 00:23:20.939 { 00:23:20.939 "method": "bdev_nvme_set_options", 00:23:20.939 "params": { 00:23:20.939 "action_on_timeout": "none", 00:23:20.939 "timeout_us": 0, 00:23:20.939 "timeout_admin_us": 0, 00:23:20.939 "keep_alive_timeout_ms": 10000, 00:23:20.939 "arbitration_burst": 0, 00:23:20.939 "low_priority_weight": 0, 00:23:20.939 "medium_priority_weight": 0, 00:23:20.939 "high_priority_weight": 0, 00:23:20.939 "nvme_adminq_poll_period_us": 10000, 00:23:20.939 "nvme_ioq_poll_period_us": 0, 00:23:20.939 "io_queue_requests": 512, 00:23:20.939 "delay_cmd_submit": true, 00:23:20.939 "transport_retry_count": 4, 00:23:20.939 "bdev_retry_count": 3, 00:23:20.939 "transport_ack_timeout": 0, 00:23:20.939 "ctrlr_loss_timeout_sec": 0, 00:23:20.939 "reconnect_delay_sec": 0, 00:23:20.939 "fast_io_fail_timeout_sec": 0, 00:23:20.939 "disable_auto_failback": false, 00:23:20.939 "generate_uuids": false, 00:23:20.939 "transport_tos": 0, 00:23:20.939 "nvme_error_stat": false, 00:23:20.939 "rdma_srq_size": 0, 00:23:20.939 "io_path_stat": false, 00:23:20.939 "allow_accel_sequence": false, 00:23:20.939 "rdma_max_cq_size": 0, 00:23:20.939 "rdma_cm_event_timeout_ms": 0, 00:23:20.939 "dhchap_digests": [ 00:23:20.939 "sha256", 00:23:20.939 "sha384", 00:23:20.939 "sha512" 00:23:20.939 ], 00:23:20.939 "dhchap_dhgroups": [ 00:23:20.939 "null", 00:23:20.939 "ffdhe2048", 00:23:20.940 "ffdhe3072", 00:23:20.940 "ffdhe4096", 00:23:20.940 "ffdhe6144", 00:23:20.940 "ffdhe8192" 00:23:20.940 ] 00:23:20.940 } 00:23:20.940 }, 00:23:20.940 { 00:23:20.940 "method": "bdev_nvme_attach_controller", 00:23:20.940 "params": { 00:23:20.940 "name": "nvme0", 00:23:20.940 "trtype": "TCP", 00:23:20.940 "adrfam": "IPv4", 00:23:20.940 "traddr": "127.0.0.1", 00:23:20.940 "trsvcid": "4420", 00:23:20.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:20.940 "prchk_reftag": false, 00:23:20.940 "prchk_guard": false, 00:23:20.940 "ctrlr_loss_timeout_sec": 0, 00:23:20.940 "reconnect_delay_sec": 0, 00:23:20.940 "fast_io_fail_timeout_sec": 0, 00:23:20.940 "psk": "key0", 00:23:20.940 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:20.940 "hdgst": false, 00:23:20.940 "ddgst": false 00:23:20.940 } 00:23:20.940 }, 00:23:20.940 { 00:23:20.940 "method": 
"bdev_nvme_set_hotplug", 00:23:20.940 "params": { 00:23:20.940 "period_us": 100000, 00:23:20.940 "enable": false 00:23:20.940 } 00:23:20.940 }, 00:23:20.940 { 00:23:20.940 "method": "bdev_wait_for_examine" 00:23:20.940 } 00:23:20.940 ] 00:23:20.940 }, 00:23:20.940 { 00:23:20.940 "subsystem": "nbd", 00:23:20.940 "config": [] 00:23:20.940 } 00:23:20.940 ] 00:23:20.940 }' 00:23:20.940 03:36:58 -- keyring/file.sh@114 -- # killprocess 356019 00:23:20.940 03:36:58 -- common/autotest_common.sh@936 -- # '[' -z 356019 ']' 00:23:20.940 03:36:58 -- common/autotest_common.sh@940 -- # kill -0 356019 00:23:20.940 03:36:58 -- common/autotest_common.sh@941 -- # uname 00:23:20.940 03:36:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:20.940 03:36:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 356019 00:23:20.940 03:36:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:20.940 03:36:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:20.940 03:36:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 356019' 00:23:20.940 killing process with pid 356019 00:23:20.940 03:36:58 -- common/autotest_common.sh@955 -- # kill 356019 00:23:20.940 Received shutdown signal, test time was about 1.000000 seconds 00:23:20.940 00:23:20.940 Latency(us) 00:23:20.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.940 =================================================================================================================== 00:23:20.940 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:20.940 03:36:58 -- common/autotest_common.sh@960 -- # wait 356019 00:23:21.199 03:36:58 -- keyring/file.sh@117 -- # bperfpid=357360 00:23:21.199 03:36:58 -- keyring/file.sh@119 -- # waitforlisten 357360 /var/tmp/bperf.sock 00:23:21.199 03:36:58 -- common/autotest_common.sh@817 -- # '[' -z 357360 ']' 00:23:21.199 03:36:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:21.199 03:36:58 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:21.199 03:36:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:21.199 03:36:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:21.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:23:21.199 03:36:58 -- keyring/file.sh@115 -- # echo '{ 00:23:21.199 "subsystems": [ 00:23:21.199 { 00:23:21.199 "subsystem": "keyring", 00:23:21.199 "config": [ 00:23:21.199 { 00:23:21.199 "method": "keyring_file_add_key", 00:23:21.199 "params": { 00:23:21.199 "name": "key0", 00:23:21.199 "path": "/tmp/tmp.S3cyMn1nxL" 00:23:21.199 } 00:23:21.199 }, 00:23:21.199 { 00:23:21.199 "method": "keyring_file_add_key", 00:23:21.199 "params": { 00:23:21.199 "name": "key1", 00:23:21.199 "path": "/tmp/tmp.o0RGZXAcmz" 00:23:21.199 } 00:23:21.199 } 00:23:21.199 ] 00:23:21.199 }, 00:23:21.199 { 00:23:21.199 "subsystem": "iobuf", 00:23:21.199 "config": [ 00:23:21.199 { 00:23:21.199 "method": "iobuf_set_options", 00:23:21.199 "params": { 00:23:21.199 "small_pool_count": 8192, 00:23:21.199 "large_pool_count": 1024, 00:23:21.200 "small_bufsize": 8192, 00:23:21.200 "large_bufsize": 135168 00:23:21.200 } 00:23:21.200 } 00:23:21.200 ] 00:23:21.200 }, 00:23:21.200 { 00:23:21.200 "subsystem": "sock", 00:23:21.200 "config": [ 00:23:21.200 { 00:23:21.200 "method": "sock_impl_set_options", 00:23:21.200 "params": { 00:23:21.200 "impl_name": "posix", 00:23:21.200 "recv_buf_size": 2097152, 00:23:21.200 "send_buf_size": 2097152, 00:23:21.200 "enable_recv_pipe": true, 00:23:21.200 "enable_quickack": false, 00:23:21.200 "enable_placement_id": 0, 00:23:21.200 "enable_zerocopy_send_server": true, 00:23:21.200 "enable_zerocopy_send_client": false, 00:23:21.200 "zerocopy_threshold": 0, 00:23:21.200 "tls_version": 0, 00:23:21.200 "enable_ktls": false 00:23:21.200 } 00:23:21.200 }, 00:23:21.200 { 00:23:21.200 "method": "sock_impl_set_options", 00:23:21.200 "params": { 00:23:21.200 "impl_name": "ssl", 00:23:21.200 "recv_buf_size": 4096, 00:23:21.200 "send_buf_size": 4096, 00:23:21.200 "enable_recv_pipe": true, 00:23:21.200 "enable_quickack": false, 00:23:21.200 "enable_placement_id": 0, 00:23:21.200 "enable_zerocopy_send_server": true, 00:23:21.200 "enable_zerocopy_send_client": false, 00:23:21.200 "zerocopy_threshold": 0, 00:23:21.200 "tls_version": 0, 00:23:21.200 "enable_ktls": false 00:23:21.200 } 00:23:21.200 } 00:23:21.200 ] 00:23:21.200 }, 00:23:21.200 { 00:23:21.200 "subsystem": "vmd", 00:23:21.200 "config": [] 00:23:21.200 }, 00:23:21.200 { 00:23:21.200 "subsystem": "accel", 00:23:21.200 "config": [ 00:23:21.200 { 00:23:21.200 "method": "accel_set_options", 00:23:21.200 "params": { 00:23:21.200 "small_cache_size": 128, 00:23:21.200 "large_cache_size": 16, 00:23:21.200 "task_count": 2048, 00:23:21.200 "sequence_count": 2048, 00:23:21.200 "buf_count": 2048 00:23:21.200 } 00:23:21.200 } 00:23:21.200 ] 00:23:21.200 }, 00:23:21.200 { 00:23:21.200 "subsystem": "bdev", 00:23:21.200 "config": [ 00:23:21.200 { 00:23:21.200 "method": "bdev_set_options", 00:23:21.200 "params": { 00:23:21.200 "bdev_io_pool_size": 65535, 00:23:21.200 "bdev_io_cache_size": 256, 00:23:21.200 "bdev_auto_examine": true, 00:23:21.200 "iobuf_small_cache_size": 128, 00:23:21.200 "iobuf_large_cache_size": 16 00:23:21.200 } 00:23:21.200 }, 00:23:21.200 { 00:23:21.200 "method": "bdev_raid_set_options", 00:23:21.200 "params": { 00:23:21.200 "process_window_size_kb": 1024 00:23:21.200 } 00:23:21.200 }, 00:23:21.200 { 00:23:21.200 "method": "bdev_iscsi_set_options", 00:23:21.200 "params": { 00:23:21.200 "timeout_sec": 30 00:23:21.200 } 00:23:21.200 }, 00:23:21.200 { 00:23:21.200 "method": "bdev_nvme_set_options", 00:23:21.200 "params": { 00:23:21.200 "action_on_timeout": "none", 00:23:21.200 "timeout_us": 0, 00:23:21.200 "timeout_admin_us": 0, 00:23:21.200 
"keep_alive_timeout_ms": 10000, 00:23:21.200 "arbitration_burst": 0, 00:23:21.200 "low_priority_weight": 0, 00:23:21.200 "medium_priority_weight": 0, 00:23:21.200 "high_priority_weight": 0, 00:23:21.200 "nvme_adminq_poll_period_us": 10000, 00:23:21.200 "nvme_ioq_poll_period_us": 0, 00:23:21.200 "io_queue_requests": 512, 00:23:21.200 "delay_cmd_submit": true, 00:23:21.200 "transport_retry_count": 4, 00:23:21.200 "bdev_retry_count": 3, 00:23:21.200 "transport_ack_timeout": 0, 00:23:21.200 "ctrlr_loss_timeout_sec": 0, 00:23:21.200 "reconnect_delay_sec": 0, 00:23:21.200 "fast_io_fail_timeout_sec": 0, 00:23:21.200 "disable_auto_failback": false, 00:23:21.200 "generate_uuids": false, 00:23:21.200 "transport_tos": 0, 00:23:21.200 "nvme_error_stat": false, 00:23:21.200 "rdma_srq_size": 0, 00:23:21.200 "io_path_stat": false, 00:23:21.200 "allow_accel_sequence": false, 00:23:21.200 "rdma_max_cq_size": 0, 00:23:21.200 "rdma_cm_event_timeout_ms": 0, 00:23:21.200 "dhchap_digests": [ 00:23:21.200 "sha256", 00:23:21.200 "sha384", 00:23:21.200 "sha512" 00:23:21.200 ], 00:23:21.200 "dhchap_dhgroups": [ 00:23:21.200 "null", 00:23:21.200 "ffdhe2048", 00:23:21.200 "ffdhe3072", 00:23:21.200 "ffdhe4096", 00:23:21.200 "ffdhe6144", 00:23:21.200 "ffdhe8192" 00:23:21.200 ] 00:23:21.200 } 00:23:21.200 }, 00:23:21.200 { 00:23:21.200 "method": "bdev_nvme_attach_controller", 00:23:21.200 "params": { 00:23:21.200 "name": "nvme0", 00:23:21.200 "trtype": "TCP", 00:23:21.200 "adrfam": "IPv4", 00:23:21.200 "traddr": "127.0.0.1", 00:23:21.200 "trsvcid": "4420", 00:23:21.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:21.200 "prchk_reftag": false, 00:23:21.200 "prchk_guard": false, 00:23:21.200 "ctrlr_loss_timeout_sec": 0, 00:23:21.200 "reconnect_delay_sec": 0, 00:23:21.200 "fast_io_fail_timeout_sec": 0, 00:23:21.200 "psk": "key0", 00:23:21.200 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:21.200 "hdgst": false, 00:23:21.200 "ddgst": false 00:23:21.200 } 00:23:21.200 }, 00:23:21.200 { 00:23:21.200 "method": "bdev_nvme_set_hotplug", 00:23:21.200 "params": { 00:23:21.200 "period_us": 100000, 00:23:21.200 "enable": false 00:23:21.200 } 00:23:21.200 }, 00:23:21.200 { 00:23:21.200 "method": "bdev_wait_for_examine" 00:23:21.200 } 00:23:21.200 ] 00:23:21.200 }, 00:23:21.200 { 00:23:21.200 "subsystem": "nbd", 00:23:21.200 "config": [] 00:23:21.200 } 00:23:21.200 ] 00:23:21.200 }' 00:23:21.200 03:36:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:21.200 03:36:58 -- common/autotest_common.sh@10 -- # set +x 00:23:21.200 [2024-04-19 03:36:58.549586] Starting SPDK v24.05-pre git sha1 c064dc584 / DPDK 23.11.0 initialization... 
00:23:21.200 [2024-04-19 03:36:58.549664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357360 ] 00:23:21.200 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.200 [2024-04-19 03:36:58.611809] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.200 [2024-04-19 03:36:58.725824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.460 [2024-04-19 03:36:58.908604] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:22.026 03:36:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:22.026 03:36:59 -- common/autotest_common.sh@850 -- # return 0 00:23:22.026 03:36:59 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:23:22.026 03:36:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:22.026 03:36:59 -- keyring/file.sh@120 -- # jq length 00:23:22.283 03:36:59 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:23:22.283 03:36:59 -- keyring/file.sh@121 -- # get_refcnt key0 00:23:22.283 03:36:59 -- keyring/common.sh@12 -- # get_key key0 00:23:22.283 03:36:59 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:22.283 03:36:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:22.283 03:36:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:22.283 03:36:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:22.540 03:36:59 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:22.540 03:36:59 -- keyring/file.sh@122 -- # get_refcnt key1 00:23:22.540 03:36:59 -- keyring/common.sh@12 -- # get_key key1 00:23:22.540 03:36:59 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:22.540 03:36:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:22.540 03:36:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:22.540 03:36:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:22.798 03:37:00 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:23:22.798 03:37:00 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:23:22.798 03:37:00 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:23:22.798 03:37:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:23.060 03:37:00 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:23:23.060 03:37:00 -- keyring/file.sh@1 -- # cleanup 00:23:23.060 03:37:00 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.S3cyMn1nxL /tmp/tmp.o0RGZXAcmz 00:23:23.060 03:37:00 -- keyring/file.sh@20 -- # killprocess 357360 00:23:23.060 03:37:00 -- common/autotest_common.sh@936 -- # '[' -z 357360 ']' 00:23:23.060 03:37:00 -- common/autotest_common.sh@940 -- # kill -0 357360 00:23:23.060 03:37:00 -- common/autotest_common.sh@941 -- # uname 00:23:23.060 03:37:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:23.060 03:37:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 357360 00:23:23.060 03:37:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:23.060 03:37:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:23.060 03:37:00 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 357360' 00:23:23.060 killing process with pid 357360 00:23:23.060 03:37:00 -- common/autotest_common.sh@955 -- # kill 357360 00:23:23.060 Received shutdown signal, test time was about 1.000000 seconds 00:23:23.060 00:23:23.060 Latency(us) 00:23:23.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.060 =================================================================================================================== 00:23:23.061 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:23.061 03:37:00 -- common/autotest_common.sh@960 -- # wait 357360 00:23:23.318 03:37:00 -- keyring/file.sh@21 -- # killprocess 356009 00:23:23.318 03:37:00 -- common/autotest_common.sh@936 -- # '[' -z 356009 ']' 00:23:23.318 03:37:00 -- common/autotest_common.sh@940 -- # kill -0 356009 00:23:23.318 03:37:00 -- common/autotest_common.sh@941 -- # uname 00:23:23.319 03:37:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:23.319 03:37:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 356009 00:23:23.319 03:37:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:23.319 03:37:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:23.319 03:37:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 356009' 00:23:23.319 killing process with pid 356009 00:23:23.319 03:37:00 -- common/autotest_common.sh@955 -- # kill 356009 00:23:23.319 [2024-04-19 03:37:00.797814] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:23.319 03:37:00 -- common/autotest_common.sh@960 -- # wait 356009 00:23:23.887 00:23:23.887 real 0m13.997s 00:23:23.887 user 0m34.164s 00:23:23.887 sys 0m3.392s 00:23:23.887 03:37:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:23.887 03:37:01 -- common/autotest_common.sh@10 -- # set +x 00:23:23.887 ************************************ 00:23:23.887 END TEST keyring_file 00:23:23.887 ************************************ 00:23:23.887 03:37:01 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:23:23.887 03:37:01 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:23:23.887 03:37:01 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:23:23.887 03:37:01 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:23:23.887 03:37:01 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:23.887 03:37:01 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:23:23.887 03:37:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:23.887 03:37:01 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:23:23.887 03:37:01 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:23:23.887 03:37:01 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:23:23.887 03:37:01 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:23.887 03:37:01 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:23:23.887 03:37:01 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:23:23.887 03:37:01 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:23:23.887 03:37:01 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:23:23.887 03:37:01 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:23:23.887 03:37:01 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:23:23.887 03:37:01 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:23:23.887 03:37:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:23.887 03:37:01 -- common/autotest_common.sh@10 -- # set +x 00:23:23.887 03:37:01 -- spdk/autotest.sh@381 -- # autotest_cleanup 
00:23:23.887 03:37:01 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:23:23.887 03:37:01 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:23:23.887 03:37:01 -- common/autotest_common.sh@10 -- # set +x 00:23:25.794 INFO: APP EXITING 00:23:25.794 INFO: killing all VMs 00:23:25.794 INFO: killing vhost app 00:23:25.794 WARN: no vhost pid file found 00:23:25.794 INFO: EXIT DONE 00:23:26.732 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:23:26.732 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:23:26.732 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:23:26.732 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:23:26.732 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:23:26.732 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:23:26.732 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:23:26.732 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:23:26.732 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:23:26.732 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:23:26.732 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:23:26.732 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:23:26.732 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:23:26.732 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:23:26.732 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:23:26.732 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:23:26.732 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:23:28.152 Cleaning 00:23:28.152 Removing: /var/run/dpdk/spdk0/config 00:23:28.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:28.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:28.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:28.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:28.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:23:28.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:23:28.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:23:28.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:23:28.152 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:28.152 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:28.152 Removing: /var/run/dpdk/spdk1/config 00:23:28.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:28.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:28.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:28.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:28.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:23:28.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:23:28.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:23:28.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:23:28.152 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:28.152 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:28.152 Removing: /var/run/dpdk/spdk1/mp_socket 00:23:28.152 Removing: /var/run/dpdk/spdk2/config 00:23:28.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:28.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:28.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:28.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:28.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:23:28.152 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:23:28.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:23:28.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:23:28.152 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:28.152 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:28.152 Removing: /var/run/dpdk/spdk3/config 00:23:28.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:28.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:28.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:28.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:28.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:23:28.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:23:28.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:23:28.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:23:28.152 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:28.152 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:28.152 Removing: /var/run/dpdk/spdk4/config 00:23:28.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:28.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:28.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:28.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:28.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:23:28.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:23:28.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:23:28.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:23:28.152 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:28.152 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:28.152 Removing: /dev/shm/bdev_svc_trace.1 00:23:28.152 Removing: /dev/shm/nvmf_trace.0 00:23:28.152 Removing: /dev/shm/spdk_tgt_trace.pid129136 00:23:28.152 Removing: /var/run/dpdk/spdk0 00:23:28.152 Removing: /var/run/dpdk/spdk1 00:23:28.152 Removing: /var/run/dpdk/spdk2 00:23:28.152 Removing: /var/run/dpdk/spdk3 00:23:28.152 Removing: /var/run/dpdk/spdk4 00:23:28.152 Removing: /var/run/dpdk/spdk_pid127411 00:23:28.152 Removing: /var/run/dpdk/spdk_pid128279 00:23:28.152 Removing: /var/run/dpdk/spdk_pid129136 00:23:28.152 Removing: /var/run/dpdk/spdk_pid129734 00:23:28.152 Removing: /var/run/dpdk/spdk_pid130362 00:23:28.152 Removing: /var/run/dpdk/spdk_pid130476 00:23:28.152 Removing: /var/run/dpdk/spdk_pid131310 00:23:28.152 Removing: /var/run/dpdk/spdk_pid131315 00:23:28.152 Removing: /var/run/dpdk/spdk_pid131577 00:23:28.152 Removing: /var/run/dpdk/spdk_pid132770 00:23:28.152 Removing: /var/run/dpdk/spdk_pid133821 00:23:28.152 Removing: /var/run/dpdk/spdk_pid134060 00:23:28.152 Removing: /var/run/dpdk/spdk_pid134333 00:23:28.152 Removing: /var/run/dpdk/spdk_pid134547 00:23:28.152 Removing: /var/run/dpdk/spdk_pid134754 00:23:28.152 Removing: /var/run/dpdk/spdk_pid134989 00:23:28.152 Removing: /var/run/dpdk/spdk_pid135211 00:23:28.152 Removing: /var/run/dpdk/spdk_pid135400 00:23:28.152 Removing: /var/run/dpdk/spdk_pid135859 00:23:28.152 Removing: /var/run/dpdk/spdk_pid138320 00:23:28.152 Removing: /var/run/dpdk/spdk_pid138639 00:23:28.152 Removing: /var/run/dpdk/spdk_pid138815 00:23:28.152 Removing: /var/run/dpdk/spdk_pid138934 00:23:28.152 Removing: /var/run/dpdk/spdk_pid139612 00:23:28.152 Removing: /var/run/dpdk/spdk_pid139792 00:23:28.152 Removing: /var/run/dpdk/spdk_pid140201 00:23:28.152 Removing: /var/run/dpdk/spdk_pid140337 00:23:28.152 Removing: /var/run/dpdk/spdk_pid140636 00:23:28.152 
Removing: /var/run/dpdk/spdk_pid140647 00:23:28.152 Removing: /var/run/dpdk/spdk_pid140893 00:23:28.152 Removing: /var/run/dpdk/spdk_pid140950 00:23:28.152 Removing: /var/run/dpdk/spdk_pid141332 00:23:28.152 Removing: /var/run/dpdk/spdk_pid141615 00:23:28.152 Removing: /var/run/dpdk/spdk_pid141821 00:23:28.152 Removing: /var/run/dpdk/spdk_pid142040 00:23:28.152 Removing: /var/run/dpdk/spdk_pid142166 00:23:28.152 Removing: /var/run/dpdk/spdk_pid142369 00:23:28.152 Removing: /var/run/dpdk/spdk_pid142540 00:23:28.152 Removing: /var/run/dpdk/spdk_pid142789 00:23:28.152 Removing: /var/run/dpdk/spdk_pid142990 00:23:28.152 Removing: /var/run/dpdk/spdk_pid143148 00:23:28.152 Removing: /var/run/dpdk/spdk_pid143432 00:23:28.152 Removing: /var/run/dpdk/spdk_pid143602 00:23:28.152 Removing: /var/run/dpdk/spdk_pid143813 00:23:28.152 Removing: /var/run/dpdk/spdk_pid144046 00:23:28.152 Removing: /var/run/dpdk/spdk_pid144210 00:23:28.152 Removing: /var/run/dpdk/spdk_pid144490 00:23:28.152 Removing: /var/run/dpdk/spdk_pid144663 00:23:28.152 Removing: /var/run/dpdk/spdk_pid144938 00:23:28.152 Removing: /var/run/dpdk/spdk_pid145108 00:23:28.152 Removing: /var/run/dpdk/spdk_pid145272 00:23:28.152 Removing: /var/run/dpdk/spdk_pid145556 00:23:28.152 Removing: /var/run/dpdk/spdk_pid145718 00:23:28.411 Removing: /var/run/dpdk/spdk_pid146003 00:23:28.411 Removing: /var/run/dpdk/spdk_pid146178 00:23:28.411 Removing: /var/run/dpdk/spdk_pid146350 00:23:28.411 Removing: /var/run/dpdk/spdk_pid146625 00:23:28.411 Removing: /var/run/dpdk/spdk_pid146820 00:23:28.411 Removing: /var/run/dpdk/spdk_pid147100 00:23:28.411 Removing: /var/run/dpdk/spdk_pid149239 00:23:28.411 Removing: /var/run/dpdk/spdk_pid175502 00:23:28.411 Removing: /var/run/dpdk/spdk_pid178622 00:23:28.411 Removing: /var/run/dpdk/spdk_pid184396 00:23:28.411 Removing: /var/run/dpdk/spdk_pid187693 00:23:28.411 Removing: /var/run/dpdk/spdk_pid190066 00:23:28.411 Removing: /var/run/dpdk/spdk_pid190584 00:23:28.411 Removing: /var/run/dpdk/spdk_pid197866 00:23:28.411 Removing: /var/run/dpdk/spdk_pid197945 00:23:28.411 Removing: /var/run/dpdk/spdk_pid198525 00:23:28.411 Removing: /var/run/dpdk/spdk_pid199179 00:23:28.411 Removing: /var/run/dpdk/spdk_pid199728 00:23:28.411 Removing: /var/run/dpdk/spdk_pid200133 00:23:28.411 Removing: /var/run/dpdk/spdk_pid200245 00:23:28.411 Removing: /var/run/dpdk/spdk_pid200386 00:23:28.411 Removing: /var/run/dpdk/spdk_pid200512 00:23:28.411 Removing: /var/run/dpdk/spdk_pid200524 00:23:28.411 Removing: /var/run/dpdk/spdk_pid201181 00:23:28.411 Removing: /var/run/dpdk/spdk_pid201722 00:23:28.411 Removing: /var/run/dpdk/spdk_pid202377 00:23:28.411 Removing: /var/run/dpdk/spdk_pid202781 00:23:28.411 Removing: /var/run/dpdk/spdk_pid202783 00:23:28.411 Removing: /var/run/dpdk/spdk_pid203047 00:23:28.411 Removing: /var/run/dpdk/spdk_pid204079 00:23:28.411 Removing: /var/run/dpdk/spdk_pid204810 00:23:28.411 Removing: /var/run/dpdk/spdk_pid211063 00:23:28.411 Removing: /var/run/dpdk/spdk_pid211341 00:23:28.411 Removing: /var/run/dpdk/spdk_pid213869 00:23:28.411 Removing: /var/run/dpdk/spdk_pid217581 00:23:28.411 Removing: /var/run/dpdk/spdk_pid219636 00:23:28.411 Removing: /var/run/dpdk/spdk_pid226026 00:23:28.411 Removing: /var/run/dpdk/spdk_pid231262 00:23:28.411 Removing: /var/run/dpdk/spdk_pid232565 00:23:28.411 Removing: /var/run/dpdk/spdk_pid233236 00:23:28.411 Removing: /var/run/dpdk/spdk_pid243575 00:23:28.411 Removing: /var/run/dpdk/spdk_pid246201 00:23:28.411 Removing: /var/run/dpdk/spdk_pid249097 00:23:28.411 Removing: 
/var/run/dpdk/spdk_pid250279 00:23:28.411 Removing: /var/run/dpdk/spdk_pid251482 00:23:28.411 Removing: /var/run/dpdk/spdk_pid251620 00:23:28.411 Removing: /var/run/dpdk/spdk_pid251754 00:23:28.411 Removing: /var/run/dpdk/spdk_pid251890 00:23:28.411 Removing: /var/run/dpdk/spdk_pid252338 00:23:28.411 Removing: /var/run/dpdk/spdk_pid253654 00:23:28.411 Removing: /var/run/dpdk/spdk_pid254388 00:23:28.411 Removing: /var/run/dpdk/spdk_pid254820 00:23:28.411 Removing: /var/run/dpdk/spdk_pid256436 00:23:28.411 Removing: /var/run/dpdk/spdk_pid256874 00:23:28.411 Removing: /var/run/dpdk/spdk_pid257435 00:23:28.411 Removing: /var/run/dpdk/spdk_pid259965 00:23:28.411 Removing: /var/run/dpdk/spdk_pid265742 00:23:28.411 Removing: /var/run/dpdk/spdk_pid268391 00:23:28.411 Removing: /var/run/dpdk/spdk_pid272176 00:23:28.411 Removing: /var/run/dpdk/spdk_pid273258 00:23:28.411 Removing: /var/run/dpdk/spdk_pid274466 00:23:28.411 Removing: /var/run/dpdk/spdk_pid277525 00:23:28.411 Removing: /var/run/dpdk/spdk_pid279883 00:23:28.411 Removing: /var/run/dpdk/spdk_pid284115 00:23:28.411 Removing: /var/run/dpdk/spdk_pid284126 00:23:28.411 Removing: /var/run/dpdk/spdk_pid287033 00:23:28.411 Removing: /var/run/dpdk/spdk_pid287163 00:23:28.411 Removing: /var/run/dpdk/spdk_pid287299 00:23:28.411 Removing: /var/run/dpdk/spdk_pid287577 00:23:28.411 Removing: /var/run/dpdk/spdk_pid287699 00:23:28.411 Removing: /var/run/dpdk/spdk_pid290210 00:23:28.411 Removing: /var/run/dpdk/spdk_pid290540 00:23:28.411 Removing: /var/run/dpdk/spdk_pid293205 00:23:28.411 Removing: /var/run/dpdk/spdk_pid295183 00:23:28.411 Removing: /var/run/dpdk/spdk_pid298496 00:23:28.411 Removing: /var/run/dpdk/spdk_pid302079 00:23:28.411 Removing: /var/run/dpdk/spdk_pid306522 00:23:28.411 Removing: /var/run/dpdk/spdk_pid306563 00:23:28.411 Removing: /var/run/dpdk/spdk_pid318862 00:23:28.411 Removing: /var/run/dpdk/spdk_pid319266 00:23:28.411 Removing: /var/run/dpdk/spdk_pid319791 00:23:28.411 Removing: /var/run/dpdk/spdk_pid320304 00:23:28.411 Removing: /var/run/dpdk/spdk_pid320914 00:23:28.411 Removing: /var/run/dpdk/spdk_pid321345 00:23:28.411 Removing: /var/run/dpdk/spdk_pid321870 00:23:28.411 Removing: /var/run/dpdk/spdk_pid322276 00:23:28.411 Removing: /var/run/dpdk/spdk_pid324925 00:23:28.411 Removing: /var/run/dpdk/spdk_pid325181 00:23:28.411 Removing: /var/run/dpdk/spdk_pid328994 00:23:28.411 Removing: /var/run/dpdk/spdk_pid329053 00:23:28.411 Removing: /var/run/dpdk/spdk_pid330778 00:23:28.411 Removing: /var/run/dpdk/spdk_pid335714 00:23:28.411 Removing: /var/run/dpdk/spdk_pid335728 00:23:28.411 Removing: /var/run/dpdk/spdk_pid338632 00:23:28.411 Removing: /var/run/dpdk/spdk_pid340039 00:23:28.411 Removing: /var/run/dpdk/spdk_pid341519 00:23:28.411 Removing: /var/run/dpdk/spdk_pid342316 00:23:28.411 Removing: /var/run/dpdk/spdk_pid343728 00:23:28.411 Removing: /var/run/dpdk/spdk_pid344605 00:23:28.411 Removing: /var/run/dpdk/spdk_pid350635 00:23:28.411 Removing: /var/run/dpdk/spdk_pid350923 00:23:28.411 Removing: /var/run/dpdk/spdk_pid351314 00:23:28.411 Removing: /var/run/dpdk/spdk_pid352883 00:23:28.411 Removing: /var/run/dpdk/spdk_pid353283 00:23:28.411 Removing: /var/run/dpdk/spdk_pid353561 00:23:28.411 Removing: /var/run/dpdk/spdk_pid356009 00:23:28.411 Removing: /var/run/dpdk/spdk_pid356019 00:23:28.411 Removing: /var/run/dpdk/spdk_pid357360 00:23:28.411 Clean 00:23:28.669 03:37:06 -- common/autotest_common.sh@1437 -- # return 0 00:23:28.669 03:37:06 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:23:28.669 03:37:06 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:23:28.669 03:37:06 -- common/autotest_common.sh@10 -- # set +x 00:23:28.669 03:37:06 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:23:28.669 03:37:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:28.669 03:37:06 -- common/autotest_common.sh@10 -- # set +x 00:23:28.669 03:37:06 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:23:28.669 03:37:06 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:23:28.669 03:37:06 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:23:28.669 03:37:06 -- spdk/autotest.sh@389 -- # hash lcov 00:23:28.669 03:37:06 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:23:28.669 03:37:06 -- spdk/autotest.sh@391 -- # hostname 00:23:28.669 03:37:06 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:23:28.927 geninfo: WARNING: invalid characters removed from testname! 00:24:01.015 03:37:35 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:02.399 03:37:39 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:05.701 03:37:42 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:08.243 03:37:45 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:10.786 03:37:48 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:14.085 03:37:51 -- 
spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:24:16.656 03:37:54 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:24:16.656 03:37:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:16.656 03:37:54 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
00:24:16.656 03:37:54 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:16.656 03:37:54 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:16.656 03:37:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:16.656 03:37:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:16.656 03:37:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:16.656 03:37:54 -- paths/export.sh@5 -- $ export PATH
00:24:16.656 03:37:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:16.656 03:37:54 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:24:16.656 03:37:54 -- common/autobuild_common.sh@435 -- $ date +%s
00:24:16.656 03:37:54 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713490674.XXXXXX
00:24:16.656 03:37:54 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713490674.mz5V8b
00:24:16.656 03:37:54 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:24:16.656 03:37:54 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:24:16.656 03:37:54 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:24:16.656 03:37:54 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:24:16.656 03:37:54 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:24:16.656 03:37:54 -- common/autobuild_common.sh@451 -- $ get_config_params
00:24:16.656 03:37:54 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:24:16.656 03:37:54 -- common/autotest_common.sh@10 -- $ set +x
00:24:16.656 03:37:54 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:24:16.656 03:37:54 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:24:16.656 03:37:54 -- pm/common@17 -- $ local monitor
00:24:16.656 03:37:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:16.656 03:37:54 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=366035
00:24:16.656 03:37:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:16.656 03:37:54 -- pm/common@21 -- $ date +%s
00:24:16.656 03:37:54 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=366037
00:24:16.656 03:37:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:16.656 03:37:54 -- pm/common@21 -- $ date +%s
00:24:16.656 03:37:54 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=366040
00:24:16.656 03:37:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:16.656 03:37:54 -- pm/common@21 -- $ date +%s
00:24:16.656 03:37:54 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=366043
00:24:16.656 03:37:54 -- pm/common@26 -- $ sleep 1
00:24:16.656 03:37:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713490674
00:24:16.656 03:37:54 -- pm/common@21 -- $ date +%s
00:24:16.656 03:37:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713490674
00:24:16.656 03:37:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713490674
00:24:16.656 03:37:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713490674
00:24:16.656 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713490674_collect-vmstat.pm.log
00:24:16.656 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713490674_collect-bmc-pm.bmc.pm.log
00:24:16.656 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713490674_collect-cpu-load.pm.log
00:24:16.656 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713490674_collect-cpu-temp.pm.log
00:24:17.594 03:37:55 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:24:17.594 03:37:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:24:17.594 03:37:55 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:17.594 03:37:55 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:24:17.594 03:37:55 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:24:17.594 03:37:55 -- spdk/autopackage.sh@19 -- $ timing_finish
00:24:17.594 03:37:55 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:24:17.594 03:37:55 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:24:17.594 03:37:55 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:24:17.854 03:37:55 -- spdk/autopackage.sh@20 -- $ exit 0
00:24:17.854 03:37:55 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:24:17.854 03:37:55 -- pm/common@30 -- $ signal_monitor_resources TERM
00:24:17.854 03:37:55 -- pm/common@41 -- $ local monitor pid pids signal=TERM
00:24:17.854 03:37:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:17.854 03:37:55 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:24:17.854 03:37:55 -- pm/common@45 -- $ pid=366059
00:24:17.854 03:37:55 -- pm/common@52 -- $ sudo kill -TERM 366059
00:24:17.854 03:37:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:17.854 03:37:55 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:24:17.854 03:37:55 -- pm/common@45 -- $ pid=366058
00:24:17.854 03:37:55 -- pm/common@52 -- $ sudo kill -TERM 366058
00:24:17.854 03:37:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:17.854 03:37:55 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:24:17.854 03:37:55 -- pm/common@45 -- $ pid=366061
00:24:17.854 03:37:55 -- pm/common@52 -- $ sudo kill -TERM 366061
00:24:17.854 03:37:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:17.854 03:37:55 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:24:17.854 03:37:55 -- pm/common@45 -- $ pid=366060
00:24:17.854 03:37:55 -- pm/common@52 -- $ sudo kill -TERM 366060
00:24:17.854 + [[ -n 44623 ]]
00:24:17.854 + sudo kill 44623
00:24:17.865 [Pipeline] }
00:24:17.883 [Pipeline] // stage
00:24:17.888 [Pipeline] }
00:24:17.905 [Pipeline] // timeout
00:24:17.910 [Pipeline] }
00:24:17.927 [Pipeline] // catchError
00:24:17.932 [Pipeline] }
00:24:17.949 [Pipeline] // wrap
00:24:17.954 [Pipeline] }
00:24:17.967 [Pipeline] // catchError
00:24:17.974 [Pipeline] stage
00:24:17.976 [Pipeline] { (Epilogue)
00:24:17.989 [Pipeline] catchError
00:24:17.990 [Pipeline] {
00:24:18.000 [Pipeline] echo
00:24:18.001 Cleanup processes
00:24:18.004 [Pipeline] sh
00:24:18.288 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:18.288 366187 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:24:18.288 366323 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:18.302 [Pipeline] sh
00:24:18.585 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:18.585 ++ grep -v 'sudo pgrep'
00:24:18.585 ++ awk '{print $1}'
00:24:18.585 + sudo kill -9 366187
00:24:18.597 [Pipeline] sh
00:24:18.882 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:24:27.030 [Pipeline] sh
00:24:27.309 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:24:27.309 Artifacts sizes are good
00:24:27.323 [Pipeline] archiveArtifacts
00:24:27.330 Archiving artifacts
00:24:27.530 [Pipeline] sh
00:24:27.813 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:24:27.828 [Pipeline] cleanWs
00:24:27.837 [WS-CLEANUP] Deleting project workspace...
00:24:27.837 [WS-CLEANUP] Deferred wipeout is used...
00:24:27.844 [WS-CLEANUP] done
00:24:27.846 [Pipeline] }
00:24:27.867 [Pipeline] // catchError
00:24:27.877 [Pipeline] sh
00:24:28.157 + logger -p user.info -t JENKINS-CI
00:24:28.166 [Pipeline] }
00:24:28.183 [Pipeline] // stage
00:24:28.189 [Pipeline] }
00:24:28.207 [Pipeline] // node
00:24:28.212 [Pipeline] End of Pipeline
00:24:28.267 Finished: SUCCESS
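
A note on the "Cleanup processes" epilogue above: it lists every process whose command line still references the test workspace, drops the pgrep invocation itself from that list, and force-kills the rest. A minimal stand-alone sketch of that pattern follows, assuming the same workspace path as this job; it is illustrative only, not one of the job's own scripts.

#!/usr/bin/env bash
# Sketch of the epilogue's workspace cleanup (workspace path assumed from this log).
workspace=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Log what is still running against the workspace.
sudo pgrep -af "$workspace"
# Collect the PIDs, excluding the pgrep line itself, then SIGKILL whatever remains.
pids=$(sudo pgrep -af "$workspace" | grep -v 'sudo pgrep' | awk '{print $1}')
if [ -n "$pids" ]; then
  sudo kill -9 $pids   # unquoted on purpose so multiple PIDs word-split
fi

The grep -v 'sudo pgrep' filter matters because pgrep -f matches full command lines, so the sudo wrapper around the pgrep itself would otherwise appear in (and poison) its own results, as the 366323 entry in the listing above shows.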